- The Making Of Metrics
A developer’s view of the journey as we prepare to launch our newest Fabrik application to the world.

Data - we all have it and it's our job as developers to try to figure out where to put it. Furthermore, businesses and teams are all trying to extract valuable insights from this data, which is easier said than done. At immedia, we are not exempt from this rule and have spent our fair share of time wrangling our data to try and highlight key insights for the clients of our Fabrik platform. Metrics, Fabrik's new analytics and data visualisation tool, was born from the desire to consolidate our previous efforts in surfacing our data into a single self-service portal that would not only present informative data, but surface it with speed. As a developer who’s been instrumental in building Metrics with cost, scale and speed in mind, I’ve been taught many lessons along the way – and I’d like to invite you along for a glimpse into my journey so far.

In the Beginning
The wilderness awaits

When evaluating the data in any system you will generally find an assortment of formats: structured, unstructured, text, tabular, binary, some API endpoint written by a person that left three years ago (and also didn’t care that you would be using it today) – a primordial soup, if you will. This wilderness that is set before you can seem daunting but, much like mowing your lawn after a year, the reward is well worth it when you smell the freshly-mowed grass or enjoy a languorously luxurious picnic between neatly arranged flower beds and carefully planted rows of trees. Metrics started with these questions: Where do we need to trim our lawn and neaten up the hedge? What flowers do we need to get; are roses the best choice for our climate and budget? Where will we place our short flower beds and our tall trees? And most importantly – who will be coming to the picnic? Or, more simply: data is used to answer the five W's: who, what, why, when and where.
For Metrics, whilst planning our garden, we came up with the following key questions that we wanted to answer in the initial release candidate:
- (who is)/(when are they)/(what are they using when)/(where are they when) listening to the live stream of the client?
- (who is)/(when are they)/(what are they using when)/(where are they when) using the client’s mobile application?
- when is the client’s application downloaded?

As for the ‘why’ – while the subjectivity of that question and the requirement for verification of the various possible answers can halt a project before it even begins, sometimes, the answers quickly reveal themselves. For instance, what we’re currently observing with Metrics is that our clients’ live-streaming and engagement have seen an upward trend over the last few weeks, which would most likely be attributed to a population currently in lockdown during the COVID-19 pandemic.

Find Your Source
The source is within you – or, at least, somewhere

Once our questions were established, it was time for us to identify from where our answers would come. As alluded to in the previous section: in any system, there will be multiple sources of data available and the selection of the source is a process. It may require some trial and error before you find a source that appropriately answers your question. We’ll skip over the boring stuff here and outline the sources that we eventually identified:

Streaming
Audio streams are served from HAProxy, which provides us with configurable log output options. We used these to configure the logging to output what we need to answer our questions. We’ll get to how we parsed this information in a later section of this post.

App Engagement
How people use the services and engage with content is tracked via Matomo. Matomo provides a powerful API for retrieving the tracked data. What’s more, it provides our members with total privacy.
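To sketch what retrieving data from Matomo's Reporting API involves: every report is exposed through a single HTTP entry point, with the report name passed as a 'method' parameter. The snippet below only builds the request URL; the base URL and token are placeholders, and the specific reports we query aren't covered in this post.

```python
from urllib.parse import urlencode

def matomo_query(base_url, method, site_id, token, **params):
    """Build a Matomo Reporting API request URL. Matomo exposes every
    report through index.php?module=API, with the report name passed
    as the 'method' parameter (e.g. 'VisitsSummary.get')."""
    query = {
        "module": "API",
        "method": method,
        "idSite": site_id,
        "format": "JSON",
        "token_auth": token,
    }
    query.update(params)
    return "%s/index.php?%s" % (base_url.rstrip("/"), urlencode(query))

# Placeholder base URL and token for illustration only.
url = matomo_query(
    "https://analytics.example.com", "VisitsSummary.get", 1,
    "anonymous", period="day", date="yesterday",
)
```

The same pattern covers both historical reports and the live counters we'll touch on later, since only the 'method' and its parameters change.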
Application Downloads
App download numbers are retrieved via the Apple App Store Connect API and Google Cloud Storage API. Both provide us with files in CSV format. We’ll talk about how we used these in a later section.

Laying the Groundwork
Foundations are important

Now that we knew what we were solving for, and from where we would be retrieving our answers, we needed to decide on which approach we would take for processing, storing and displaying the data. We vetted some options and finally decided to use Azure Databricks as our data processor with Scala as our data processing language. Azure Databricks provides us with an Apache Spark cluster that we can scale on demand to meet workloads. It’s also fast. Very. Very. Fast. For storage of our processed data, we identified Azure Cosmos DB and Azure Storage; Azure Cosmos DB for its ease of storage and retrieval of data (with a familiar SQL-like syntax) and Azure Storage for cost-effective storage of files. The data in Matomo was already being stored in a MySQL database, which we don’t need to query directly because Matomo's API already provides us with all the data we need. We would have a .NET Core API serving as the gateway between users and the stored data and an Angular application that would serve as the frontend. With the outline of our garden in place we felt confident that we would be able to tame the wilderness set before us and we were ready to get started planting and arranging our flower beds.

The People Behind the Data

A key part of building a tool that provides insights on how humans are using it is being respectful of the humans themselves. Before we jump into all the technical details, it is important to note that at immedia we hold the privacy and data rights of the people that use our platform in high regard. This means that we are always thinking about what needs to be done to ensure that data is properly anonymised before surfacing it to the people who use the platform.
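To make 'properly anonymised' concrete, here is a minimal Python sketch of the approach described below for streaming data: parse a request record and replace the IP address with a one-way hash before any further processing happens. The record format and salt are illustrative, not our production configuration.

```python
import hashlib
import re

# Simplified request record: "IP [timestamp] path". This shape is
# illustrative; the real HAProxy log format is configurable.
LOG_PATTERN = re.compile(
    r"(?P<ip>\d{1,3}(?:\.\d{1,3}){3}) \[(?P<time>[^\]]+)\] (?P<path>\S+)"
)

def anonymise_line(line, salt="per-deployment-salt"):
    """Parse one request line and swap the IP for a one-way hash
    before any further processing sees the record."""
    match = LOG_PATTERN.match(line)
    if match is None:
        raise ValueError("unparseable line: %r" % line)
    record = match.groupdict()
    # Same listener -> same hash, so sessions can still be counted,
    # but the original IP address is unrecoverable.
    record["listener_id"] = hashlib.sha256(
        (salt + record.pop("ip")).encode("utf-8")
    ).hexdigest()
    return record

record = anonymise_line(
    "203.0.113.7 [10/Apr/2020:06:00:01 +0000] /stream/station1"
)
```

Because the hash is deterministic, unique-listener counts still work downstream, while the identifying value itself never enters the pipeline.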
Metrics fully anonymises the data before it gets surfaced. Anything that can be used to identify a user is removed. For instance: when processing our streaming data we perform a one-way hash of the IP address of the request before all of our data processing is performed. Furthermore, Matomo, our analytics engine, has user privacy baked into its design and also discards identifiable information as soon as it can.

Streaming
From logs to lines

Our pipeline for importing streaming data works roughly as follows: Every hour a log file is rotated on HAProxy and uploaded to Azure Storage via the post-rotate hook. We read these log files into our Azure Databricks environment via a streaming query. The log files are processed, and relevant information is extracted and inserted into Delta Lake tables. Identifiable information, such as IP addresses, is dropped before we write to Delta Lake storage.

We have another pipeline that will run and create rollups of our data for use with frontend applications, which roughly works as follows: It calculates the peak number of listeners for all the newly created sessions per minute and saves the result to Delta Lake storage. It then creates rollups of all our specified periods and stores them in JSON format in Azure Storage – we’ll look at some examples of this soon.

Lastly, we have a pipeline that will calculate and store peak, total and unique listeners for different periods of time, and write the entries to Azure Cosmos DB.

Exploring Streaming Results

Summary Data
Calculation of the summary data is merely done as an aggregate count or sum over the period of the rollup. For instance, 'Total Sessions' is calculated as a count and 'Total Days' is the sum of all streaming session lengths.

Streaming Numbers
Streaming numbers are calculated as an aggregate over the size of the granularity specified.
In the graph above, 'Total Sessions' is the count of all listens per day, 'Total Unique Listeners' is the number of people who listened per day, and 'Peak Concurrent' is the maximum number of listeners for that particular day. These are stored in Cosmos DB, which allows us to search and display arbitrary ranges.

Streaming Numbers by Hour
Streaming Numbers by Hour are calculated as an aggregate over the hour of day for sessions. This graph depicts the total hours streamed, the count of sessions streamed by listeners, the number of people who listened per hour, the count of sessions that were started, and the count of sessions that were completed per hour.

Streaming Session Length Breakdown
Session Length Breakdowns are calculated by using the Bucketizer class in Scala. We count the number of sessions for every duration bucket, ranging from 1 minute all the way to 18 hours. The frontend displays this as a pie chart, while the raw rollup data looks as follows:

Summaries By Dimension
Summaries By Dimension are calculated as a group-by aggregate over session data. For instance, the 'Total Sessions' section lists the count of sessions grouped by each source, displayed from highest to lowest.

App Engagement
While the statistics that describe how people use the Android and iOS apps are an exciting part of our data for our clients, they were much less exciting in terms of the data transformation work to be done. In essence, we query the Matomo API and display the data on the frontends. Luckily for us, Matomo did the heavy lifting in this regard and our biggest concern was displaying the data.

App Downloads
Our pipeline for application downloads works roughly as follows: An Azure Function App retrieves the CSV files from the Google and Apple APIs respectively. The function app does some slight preprocessing on the files and then stores them in Azure Storage. These CSV files are read into our Azure Databricks environment via a streaming query.
Databricks does some processing on the data and writes the results to Cosmos DB. The result of this pipeline is that we can query App Downloads for any arbitrary period. The summary data is calculated by running an aggregate count query against our Cosmos DB container. The charts are rendered by querying Cosmos DB and displaying the entry of each day.

Live Data

Initially one of our goals was to surface data and surface it with speed. Up until this point we have only discussed static rolled-up data and the exploration thereof. Whilst this is useful for doing some rudimentary analysis after the fact, these stats are not able to tell our clients what is happening right now. In other words: we haven’t checked expedience off our list. To surface live data, we had to do some out-of-the-box thinking. Processing log lines in real time wasn’t feasible as we only rotate the log every hour (unless of course you deem an hour ago as “live”) and we couldn’t really speed up the rotation.

Live Stream Listeners

For stream listeners there are two distinct types: HLS streams and Icecast streams. Both of these required a unique approach to surface the live listener data. For HLS we wrote a .NET Core application that we deployed to our HAProxy server. This application checks the HAProxy Stick Table for the listener count of the tenant. We initially tried the HAProxy Stream Processing Offload Engine but this went bad – very bad – as it could not handle the number of requests our servers were handling. In the end we got it working along the following lines: Our .NET Core application runs a command to check listener counts on the HAProxy Stick Table. It sends the listener count over Azure Event Hubs.
An Azure Function picks up the event hub message and stores it in Redis (we keep about 2 hours of this data in Redis). We query Redis to show live stream data. Icecast was slightly simpler to retrieve the live listener count for, as it exposes administration endpoints that return XML data. The process is roughly as follows: An Azure Function imports the listener count from Icecast. We store the listener count in Redis. We query Redis to show the Icecast stream data.

Live App Visitors & Engagement

Retrieving the live app engagement numbers follows a very similar pattern to the Icecast listener count imports. We rely on Matomo’s reporting to achieve this. The process is roughly as follows: An Azure Function imports the current live visitor count and the actions taken in the last minute. These values are stored in Redis. We query Redis to show the App Visitor data.

Events

Prior to our Metrics application, Fabrik already had an Azure Event Hub to which it would report events. The reported events are retrieved via an Event Hub Listener. The process is roughly as follows: An Azure Function listens for events on the Event Hub. It stores the events in Redis. We query Redis to show the event data. The example above is during a relatively quiet hour – it can get crazy at times as seen in the screenshot below.

The Road Ahead
Growing our garden

All things considered, the creation of Metrics has been quite a journey for us and we have learned a lot about what it takes to build a data pipeline that is cost-effective and scalable. We aren’t planning on letting the weeds grow in our data garden over the next few months either - our clients have multiple new features to look forward to that are sure to teach us new lessons and provide greater insights into our data. Our mission to figure out what data we have and where we want to share it is a lot closer to being complete, but will never be completely so.
We hope to keep delivering valuable insights to our clients and to help answer more of your operational questions going forward. This journey is not yet over and we’re excited and ready to take on the future of Metrics.
- Smile 90.4FM becomes a trusted source for information in the midst of a global pandemic
As the scale and severity of the COVID-19 pandemic continue to escalate across the world, we are all being inundated with fake news and misinformation on social media and private groups, sending fear and panic rippling through communities who desperately need a reliable, credible source of news. Smile 90.4FM’s programming style of positivity and hope in the ‘Mother City’ and their positioning as the ‘amplifying the good news’ station mean that the Cape Town-based radio station is uniquely positioned and already geared to provide comfort and clarity to their community during a daunting period. This offers them the opportunity to cut through the noise and quickly establish themselves as a trusted voice at a time when the need is crucial.

A new status quo calls for new approaches

Empowered by the Fabrik suite of software applications since 2016, the station had provided their closest listeners with a mobile app in which they could send and receive direct messages with the station, as well as listen to podcasts and the live broadcast – wherever they were. Since launch, the platform’s direct messaging functionality has helped the station become closer to their audience. With a broadcast message, all app members receive a push notification immediately alerting them about a new competition, survey or call-to-action – leading to rapid response and healthy engagement from Smile 90.4FM’s most loyal listeners. And by empowering listeners to send the station voice notes and text messages, the app gives every listener a fair shot at having their opinion heard without the burden of a big phone bill for SMSes or phone calls. During this particular phase of uncertainty, Smile stepped up to the opportunity of being of service to their audience. Through a dedicated messaging channel within their mobile app, they supplement news broadcasts with timely, relevant and factual COVID-19 updates that contain an added focus on the Western Cape province.
“It was imperative that we provided a credible, relevant, succinct and immediate update to the pandemic,” says Naveen Singh, Programming Manager at Smile 90.4FM. “Our main role was to ensure that we engaged, informed, entertained, and empathised with our audience across all platforms.” Powered by Fabrik’s secure, private platform, each registered member of the app is automatically added to this channel and flagged to be notified whenever a new announcement is received about the COVID-19 pandemic. Individual members are able to exit the channel of their own accord, or mute the channel if they prefer not to receive notifications. By setting the channel as ‘read-only,’ the station can post streams of news-related updates and links to useful resources, while keeping out the noise of well-meaning community members forwarding other types of info that may not have been fact-checked prior to posting. To get the content out rapidly to the audiences across their various digital properties – the app, Facebook and Twitter – the News team make use of Fabrik’s Smashboard member engagement dashboard to easily disseminate breaking news or share pertinent announcements from live national addresses in real-time. A key addition to Smile’s live on-air programming is the broadcast of President Cyril Ramaphosa’s nationwide addresses, which are also streamed live via the Fabrik-enabled app and website streams, as well as interesting daily segments on Coronavirus-related issues that may affect their listeners. Those segments are made available immediately after broadcast as podcasts in the app and on the website. In addition, the station has incorporated daily on-air calls to action into every half-hourly news bulletin as well as daily promotions prompting listeners to subscribe to the new COVID-19 channel along with information about the COVID-19 hotline and hygiene practices.
And to bring as much awareness as possible to how their audience can stay safe, the advertising billboards that appear when listeners first open the app are being dedicated to educating their community about protective measures related to the virus.

Does being ‘of service’ improve engagement?

To monitor how the consistent app-based COVID-19 updates are being received, the station has been tracking their audience’s response in real time via Fabrik’s Metrics dashboard – a reporting and analytics tool that empowers radio stations to view and monitor live engagement data. What Smile observed is that more of the station’s existing app members are opening the app more often, drawn in by the need to be kept informed by a trusted voice in their own community. Since the COVID-19 Updates channel was launched on the 12th of March, an average of 5 updates have been sent to members of the channel every day, keeping them informed on the effects of the pandemic on local communities and health authorities without becoming a source of spam. Over the course of the ensuing weeks, the team noticed interesting correlations between the ongoing updates and the frequency with which new and existing members have been engaging with the app. In March, the app was used almost 4 times more than in the same month a year ago by more than twice the number of members and, notably, the number of individuals who opened the app over 100 times – the station’s most loyal app-based audience – increased by 57%. The overall app audience continues to increase by hundreds of new members every week since the commencement of the amplified COVID-19 coverage – demonstrating a sustained appetite for quality, factual, real-time news notifications.
What this could imply is that new app members are responding to an increase in on-air or social media calls-to-action, or that the station’s existing app membership are sharing the app with their friends and family specifically to receive trusted, consistent COVID-19 updates.

Come for the news, stay for the community

In correlation with the increase in app engagement, stats show that the number of live stream plays increased over this period. It seems that, while reading up on the latest COVID-19 news, more people are digitally ‘tuning in’ to what’s being played on-air – a significant takeaway. It’s undeniable that the global COVID-19 pandemic has left many disheartened and searching for clarity in a confusing landscape of misinformation and fake news. Smile 90.4FM’s early investment in a multi-channel approach has empowered the station to rapidly surround their community with accurate, verified news and educational information about the pandemic, supported by the effective but simple-to-use messaging and audience engagement functionality within their Fabrik suite of services. What’s more, the station was able to track the impact of their ongoing COVID-19 updates in real time using Fabrik’s data visualising tools, which provided them a glimpse into the effectiveness of their initiatives as well as the impetus they needed to pull the community together through this challenging period.
- Maintaining scalable Cloud Systems in times of Unanticipated Peaks
How Microsoft Azure’s scalability and elasticity allowed Fabrik to respond quickly to a rapidly escalating increase in community engagement during COVID-19.

When immedia initially set out to build our Fabrik platform – a suite of ‘born-in-the-cloud' audience engagement tools and workflows – our development team elected to adopt a cloud-first approach in its development, using Microsoft’s Azure platform. Some of the benefits we considered in selecting Azure include:
- cost savings through economies of scale and a tenant-based system with shared resources;
- the inherent scalability, redundancy and reliability of Microsoft Azure, which enables our applications to be automatically adaptable to increases in resource demand;
- the ability to take advantage of the monitoring and analytics of Microsoft Azure so that we can diagnose issues quickly and report accurately;
- the ability to adopt an Agile approach when architecting and developing the platform so we can be responsive to our clients’ needs, as opposed to over-engineering a system based on superficial and assumptive requirements.

As the Fabrik codebase became more established and our service offering continued to diversify, our technical teams were able to add more and more features to our mobile apps, APIs and web-based tools aimed at empowering our clients to better serve their members with ease. As a result, the number of individuals using the Fabrik suite of applications in our platform has increased dramatically over time, and so in turn have our infrastructure requirements, which include PaaS (Platform-as-a-Service) offerings such as ASP.NET APIs and Azure SQL Databases. With the onset of the COVID-19 pandemic in early 2020, our clients and their members heightened their communication with each other, leading to an increased reliance on their Fabrik applications.
Our technical teams therefore anticipated a rapidly-escalating increase in traffic to our platform, more unpredictable network traffic into our system, and more strain on our APIs and databases. When this kind of demand occurs, the specific nature of the demand is difficult to predict and although we were confident that the Fabrik platform would perform based on Azure’s many elasticity and scalability services, in unprecedented and extraordinary times like this, our team made the call to be on high alert in case of any bottlenecks in our systems that might require manual intervention. The primary impact was to our backend database system which was developed on Azure SQL Databases using the Elastic Pools deployment model. The elastic pools served us well because of the dedicated amount of resources that were allocated for our databases, but when the demand drastically increased, the allocated resources needed to be expanded appropriately to cater for the demand. In addition, due to the number of people engaging simultaneously, our API gateway which is hosted on Azure App Services needed to be scaled out to more instances as well. The following statistics indicate the response times of some of our API calls during peak days: Using Application Insights, we were able to identify specific API calls in our system that were taking more time than expected to process. During the same period, the usage on our database was as follows: Each spike in database usage indicates occasions when our elastic pool was being maxed out. Our infrastructure team mitigated this by increasing the capacity of the pool to meet the usage demand on the platform. When our databases were strained, this impacted the response times of our APIs. App visitors would experience slowness when using some of the functionality in the apps. After analysing this usage pattern, we were able to identify areas we could improve to better cater for this kind of unpredictable usage. 
We identified specific API endpoints that needed to be optimised and certain areas of the system that needed reworking to take advantage of the serverless capabilities of Microsoft Azure. One of those services is the use of Cosmos DB for serving ‘chat’-related data, which will be beneficial in improving the overall load on our API. Cosmos DB allows us to separate reads and writes across multiple servers that are possibly distributed across multiple Azure regions – as opposed to SQL Elastic Pools, where reads and writes happen on the same cluster. We are actively investing time in decoupling our APIs into a more microservice-oriented model, through the use of Azure Functions which, together with Cosmos DB, will help in distributing the load on our APIs at a global scale. Scaling up the database tier resolved the issues experienced by people using the apps within minutes. Throughout this entire scenario, we were able to take advantage of Microsoft Azure’s monitoring, alerting and log analytics capabilities which gave us access and visibility into the health of the system, highlighting areas that required attention and/or improvement at a glance. Looking ahead, there are always going to be improvements we can make to our platform and applications to better serve our customers’ requirements and the requirements of their members. The monitoring we have in place allows us to make more informed decisions that keep improving scalability, reliability and availability, and that are more anticipative of the areas we need to be scaling up and out in response to unpredictable and unforeseeable circumstances.
- Run smarter campaigns with Smashboard
Use Smashboard's updated campaign functionality to facilitate campaigns, search and filter entries, select winners and find content to play out on air! Your Smashboard engagement dashboard recently received a number of upgrades aimed at empowering you to facilitate campaigns and competitions more easily - all in one place! Here are a few new things you can do in Smashboard:

Create an automated campaign response

Once you've created your campaign using the Campaign Manager in Smashboard, you can now create a customised automated response that will be sent directly to all entrants via the Contact Us direct conversation within their app. In addition to using this feature to acknowledge the participant's successful entry into the campaign, your advertiser might like to book this Automated Response to send the participant more information about their offer, share their contact information, or prompt them to visit the advertiser's website.

Pick a campaign winner

You can now select your competition winner/s from a list of unique entries (the individuals who have successfully entered) or from a comprehensive list which may contain multiple entries per entrant. Once you've created the campaign, follow these steps:
- Find the 'Action' section of your campaign.
- Click the '...' symbol, which will give you a couple of options, then click 'Winners.'
- From there, configure your campaign to either 'allow multiple entries' from entrants or choose from a list of unique entries (one entry per entrant).
- Lastly, choose 'Pick Winner' and Smashboard will randomly select a winner for you!

Exclude recent winners from your campaign

Using the same process, you can also disqualify an entrant that has already won a competition within a specified timeframe by checking the box that says 'Disqualify previous winners' and then specifying the time period you'd like to exclude.
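The winner-picking options described above boil down to simple selection logic. Here is an illustrative Python sketch (not Smashboard's actual implementation): unique entries give each entrant one ticket regardless of how many times they entered, and previous winners can be disqualified before the random draw.

```python
import random

def pick_winner(entries, previous_winners=(), unique_entrants=True, seed=None):
    """Pick a campaign winner. unique_entrants gives each entrant one
    ticket no matter how many times they entered; anyone listed in
    previous_winners is disqualified before the draw. (Illustrative
    logic only, not Smashboard's actual implementation.)"""
    pool = [entry for entry in entries if entry not in previous_winners]
    if unique_entrants:
        pool = list(dict.fromkeys(pool))  # de-duplicate, keep order
    if not pool:
        return None
    return random.Random(seed).choice(pool)

# 'sipho' entered twice but gets one ticket; 'lerato' won recently
# and is excluded from the draw.
winner = pick_winner(
    ["thandi", "sipho", "sipho", "lerato"],
    previous_winners={"lerato"},
)
```

Allowing multiple entries simply means skipping the de-duplication step, so frequent entrants hold more tickets in the draw.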
Filter for competition entries

Smashboard's new Campaign filter gives you the option to filter out all campaign entries or only show campaign entries. This is particularly useful when you're receiving many different kinds of engagement from your members to Smashboard, and you'd like to search for campaign entries which you can then select to play on air.
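Conceptually, the Campaign filter reduces to filtering the engagement inbox on a campaign-entry flag. A small Python sketch, with hypothetical field names rather than Smashboard's actual schema:

```python
def filter_engagement(items, mode="all"):
    """Filter an engagement inbox by campaign status: 'all' shows
    everything, 'entries_only' keeps only campaign entries, and
    'exclude_entries' filters campaign entries out. Field names are
    hypothetical, not Smashboard's actual schema."""
    if mode == "entries_only":
        return [item for item in items if item.get("campaign_entry")]
    if mode == "exclude_entries":
        return [item for item in items if not item.get("campaign_entry")]
    return list(items)

inbox = [
    {"id": 1, "campaign_entry": True},   # a competition entry
    {"id": 2, "campaign_entry": False},  # a regular voice note
]
```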
- Izwi Lomzansi will now be available overseas
Originally posted in Isolezwe. The fast-growing Durban community radio station will now also be available overseas. Izwi Lomzansi FM, based at Durban Station, yesterday launched its mobile app, bringing modern technology to its listeners' cellphones. The app, called Izwi98FM and available over the internet, was built by immedia. The station's programming manager, Futhi Khumalo, said their research showed that 43% of Izwi Lomzansi's listeners were already listening through the app, which became available this week; yesterday it was officially launched. In Johannesburg the station has emerged as a leader, because many people who travel to Durban listen to it while waiting for buses and trains at the station. "It was time for us to use this kind of technology, because many people complained that there are places the broadcast does not reach. We have even had to connect with many of our Johannesburg listeners by working with another station. Now that we are available on cellphones, it will be much easier to connect with them and with listeners in other provinces. The good thing is that even people overseas will be able to get all of our programmes and easily share their feedback on them," said Khumalo. The station manager, Vela Xulu, said their big goal is to make the station operate like the major ones, even though it is a community station. "We want to create job opportunities for ourselves and employ young people who know technology, marketing, broadcasting and communications best. Although we are a small station, we don't want our listeners to go without anything. These days everything is done with technology, which is why we are raising our standard," said Xulu. Nontobeko Mbelu, a presenter who has been at the station for years, said the arrival of the app has made communicating with listeners much easier. "We now do everything on one page, and listeners can even send us voice notes," said Nontobeko. Anice Hassim of immedia, one of the masterminds behind the app and the person overseeing its operation, said that no information will go astray.
Hassim said everything will be easier for listeners because they will be able to listen to the station even when they are not at home. "A person will be able to record the programmes they like and listen to them again when they find the time. If a topic was discussed and you didn't hear it well, you will be able to go back and listen to it again carefully," said Hassim.
- What holds us back from an insight-driven approach to business?
On 14 May 2018, our comments from a Deloitte Risk Advisory panel on Data Analytics on the place of analytics in driving business decision-making were published in a Deloitte.co.za blog post, republished here for your reading convenience: The practice of using data and analytics to deliver relevant and timely information to drive business decisions is still not pervasive enough in South Africa – why is this? Is it a lack of understanding of what is possible, weak leadership, poor data, legacy systems or simply a lack of strategy? Perhaps all of the above contribute. Perhaps leaders have become sceptical about investing in data and IT without experiencing the promised financial returns. Essentially, analytics professionals are simply not demonstrating tangible business value! Increasingly analytics, specifically Artificial Intelligence and Machine Learning, is discussed in numerous mainstream publications and platforms with stories relating to the remarkable achievements in the world of innovation and new business models such as those of Uber and Airbnb. It would be inspiring to have a South African example. While discussions around analytics have become pervasive in boardrooms, actions speak louder than words, and there is a lack of evidence of insights driving decisions in day-to-day business. At Deloitte Risk Advisory’s first Data Analytics gathering titled “Analytics in conversation”, we discussed and debated what it takes to become an insight driven organisation in South Africa. The objective of this new forum is for business leaders and analytics practitioners to unpack and debate the issues that we face locally around data and analytics in business. Deloitte Data Analytics will host future gatherings and leaders from all industries and areas of specialisation are welcome to join. 
The goal is to solve our challenges here in South Africa, while at the same time building a collaborative professional network of people with complementary expertise, experience and, most importantly, passion. The participants of the first Data Analytics gathering offered a number of reasons as to why we might be falling behind in South Africa: Fear: While analytics terminology is increasingly common, people are intimidated by their minimal knowledge. They lack an understanding of how the insights are derived and how the output can be utilised in their daily businesses. If something seems like magic then it is difficult to trust. In addition, there is a perception that sophisticated analytical solutions might replace jobs, which only adds to their apprehension. Communication and skills: Analytics is a team sport; it requires business, IT, data, mathematics, statistics and storytelling skills. In the absence of the context of the business problem, the technical skills to develop the data and analytics solution, and the adaptation of business processes to consume the output, the financial benefits of analytics will never be realised. While there are often pockets of analytics excellence within an organisation, the output is not embedded into a process where it can be used and acted upon in a timely manner. Analytics and operations are currently two separate functions, which means that business problems are not resolved with data and information. Culture: In our current economic environment in South Africa, people often feel vulnerable, which can lead to resistance to experimenting with new ways of working. We gain comfort in operating in the “traditional business as usual” model rather than running the risk of an unsuccessful new initiative. This culture inhibits change and innovative thinking.
Expectations: In our personal lives, we expect instant and relevant responses; if our social media does not update within seconds then we become disgruntled; if we receive an offer that is not relevant to us then we lose interest. We manage our exercise schedule by the instructions from our fitness device! However, in our professional lives, we are satisfied with manual and lengthy processes that deliver old and irrelevant information. Data and IT: Often the data and IT systems prohibit the timely delivery of insights. Poor-quality data stored in silos across the organisation, coupled with inadequate data management tools, makes the analytics process long and frustrating. Strategy and Leadership: Executives do not formulate and drive the analytics strategy; hence, there is a lack of focus, investment and commitment. The solutions to these challenges are multi-faceted, but the Data Analytics discussion suggested four fundamentals that are required for change: Data needs to be treated as the lifeblood of the organisation. Employees at all levels require education around what analytics is, why it is important, how it can drive competitive advantage and, most importantly, how it benefits each employee. Analytics teams must demonstrate and deliver tangible value by solving relevant business issues. It is vital to empower cross-functional teams to collaborate and experiment. Executives must create the vision of what is possible and then drive a strategy to become insight-driven. The focus must be on investment, change management and people to make it happen. This will create communication, imagination and innovation. Analytics is an enabler for capturing institutional knowledge in a country that is short of skills. Analytics, in the right business environment, can track consumer sentiment, build customer loyalty, gain competitive advantages and enable more effective business decisions.
While Deloitte’s first Data Analytics forum raised more questions than answers, there was one overarching message – analytics is already part of business, and those who do it properly will survive, compete and thrive. The Data Analytics forum is the beginning of a constructive discussion in the South African context around data and analytics that will help business start talking the same language across the functional barriers of Business, IT, Finance and Analytics, and share knowledge for the benefit of all employees, consumers and businesses. We need to become fanatical about developing solutions that are applicable, digestible and useable. Writer: Dr Tracy Dunbar, Associate Director, Data Analytics, Deloitte South Africa, firstname.lastname@example.org. Contributors: Anice Hassim, Carl Wocke, Danny Saksenberg, Selene Shah and Phil Molefe.
- Say Hello to Community Centre
Use Community Centre to view real-time engagement, administer your app membership & configure your messaging settings. From day-to-day operations such as membership application and member suspension to managing a conversation or channel within your Fabrik-powered app, Community Centre has been designed to be the new go-to tool for community management! Here are a few ways you can start to use our newest addition to the Fabrik suite of applications: Administer your app membership On your Community Centre dashboard, you now have the ability to view profile information about each registered member of your community, approve or reject their applications, or assign trusted privileges to members. Configure your Messaging settings You’ll also be able to configure the settings of the Messaging functionality within your app by adding new conversations or channels, editing or removing existing ones, or managing particular members within each group. View engagement metrics in real time Via Community Centre, you will be able to access your Fabrik Metrics dashboard (currently in Preview) – a real-time, data-driven system that lets your stakeholders observe and understand your audience’s engagement, instantly visualise your data and immerse themselves in your audience’s behaviour as it happens.
- Durban adds to the Fabrik
Originally posted in the Sunday Tribune.
- Advertising Campaign Improvements
Some new features and fixes for the Campaigns functionality in Smashboard. New Features
- Added an option to export messages to CSV
Improvements
- Replaced action buttons with a popup menu
Winner Selection
- Reworked winner selection to pull messages from Elasticsearch
- Hid the previous-winners title when there is no previous winner
Bug Fixes
- Fixed the broken CSS for the winner-selection modal
- Positioned the title of the previous winner below the unconfirmed winner
- Fixed the incorrect engagement count for each campaign
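For the curious, pulling a random winner candidate out of Elasticsearch is commonly done with a `function_score` query using `random_score`, so the top hit is a randomly scored matching message. Below is a minimal sketch of what such a request body could look like; the index layout, the `campaign_id` field and the seed are hypothetical illustrations, not Smashboard's actual schema or implementation:

```python
import json


def build_winner_query(campaign_id: str, seed: int) -> dict:
    """Build an Elasticsearch request body that scores messages for one
    campaign randomly, so the single returned hit is a winner candidate.
    (Field and parameter names here are illustrative assumptions.)"""
    return {
        "size": 1,  # we only want one candidate back
        "query": {
            "function_score": {
                # Restrict to messages belonging to this campaign.
                "query": {"term": {"campaign_id": campaign_id}},
                # Seeded random scoring makes the draw reproducible.
                "random_score": {"seed": seed, "field": "_seq_no"},
            }
        },
    }


# The body would be POSTed to the index's _search endpoint.
print(json.dumps(build_winner_query("summer-giveaway", seed=42), indent=2))
```

Seeding the random score is a useful property for a competition tool: re-running the same draw with the same seed returns the same winner, which makes the selection auditable.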
- What's New in Streaming
This month, we’ve developed and improved features that will allow you, our audio broadcast clients, to offer your audience a better podcasting and live-streaming experience! We’ve launched a new podcast widget that can be embedded on your website. It displays all the podcasts you’ve recorded through Echocast in one easy place for listeners to find on your site. We’ve also improved our streaming options and can now offer a more efficient 64 kbps livestream: your listeners will experience nearly the same audio quality but use half the data! In addition, both the livestream widget and the podcast widget support pre-roll advertisements!
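The "half the data" claim is simple bitrate arithmetic; assuming the previous stream ran at 128 kbps (an assumption implied by the halving, not stated in the post), a quick sketch of per-hour data usage:

```python
def stream_megabytes(bitrate_kbps: float, seconds: float) -> float:
    """Approximate data transferred by a constant-bitrate stream, in MB.

    kilobits/s -> bits -> bytes -> megabytes (1 MB = 1,000,000 bytes).
    """
    return bitrate_kbps * 1000 * seconds / 8 / 1_000_000


# One hour of listening at each bitrate (128 kbps is the assumed old rate):
old = stream_megabytes(128, 3600)  # 57.6 MB
new = stream_megabytes(64, 3600)   # 28.8 MB
print(f"128 kbps: {old:.1f} MB/h, 64 kbps: {new:.1f} MB/h")
```

So an hour of listening drops from roughly 57.6 MB to roughly 28.8 MB, a meaningful saving for listeners on metered mobile data.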