Conversational BI: the AI‑powered future of data intelligence

Dashboards used to be the answer, until business users started asking better questions. In fast-paced environments where decisions can’t wait for the next reporting cycle, static views just don’t cut it anymore. Users expect to engage with data as naturally as they would with a colleague: by asking a question and getting a clear, useful answer.

Conversational BI makes that possible. By combining large language models (LLMs) with governed data platforms like Microsoft Fabric and Microsoft Azure, it turns raw data into on-demand intelligence. Less digging, more deciding.

The challenge

Traditional data intelligence environments were built to answer known questions through pre-defined dashboards and reports. That worked until business needs changed faster than reports could be updated. As data volumes grow and decision cycles shorten, teams no longer want to wait days for a new dashboard. They want to ask ad-hoc questions and get answers, instantly.

Meanwhile, data teams are buried in repeat requests: small tweaks, new views, slightly different filters. Time that could be used for value-added analytics is lost to backlog and maintenance. Despite all the tech in place, the experience often feels rigid and slow.

The solution

Conversational BI redefines how people interact with data, using AI to deliver fast, contextual answers in natural language. Instead of clicking through a forest of dashboards, users can simply ask, “How did revenue evolve last quarter by region?”, “Which products are driving margin decline?” or “What changed in churn after the price update?”.

Here’s how it works in a modern Microsoft Azure and Microsoft Fabric environment:

  • Governed data as the foundation: It all starts with a strong, governed data model. Microsoft Fabric, Azure and lakehouse structures expose clean, curated datasets with business-friendly entities like customers, products or orders. Centralised rules around quality, lineage and security ensure that answers are trustworthy.
  • LLM-powered intelligence: Large language models interpret the user’s question, map it to the correct metrics and dimensions, and generate the necessary queries. They summarise insights, highlight trends, and suggest follow-up questions — even visualising results in tools like Power BI. The outcome is not just data, but narrative: a story users can act on.
  • Built-in governance and control: Conversational BI doesn’t bypass governance, it builds on it. Role-based access, row-level security and built-in guardrails ensure users only see what they’re allowed to see. AI responses are explainable and traceable, with previews and validation tools that help data teams monitor, review and improve the system over time.

Together, these elements allow organisations to embed AI-driven experiences directly into the tools their people already use. From embedded chat to smart search, Conversational BI makes data as accessible as a conversation while keeping full control behind the scenes.
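
To make this concrete, here is a minimal sketch of what the question-to-query step could look like. The semantic model, the field names and the call_llm helper are simplified placeholders for illustration, not a Microsoft Fabric or Azure OpenAI API:

  # Minimal sketch: turning a natural-language question into a governed SQL query.
  # SEMANTIC_MODEL and call_llm are illustrative assumptions, not a real API.
  SEMANTIC_MODEL = {
      "revenue": {"table": "fact_sales", "column": "net_revenue"},
      "region":  {"table": "dim_region", "column": "region_name"},
      "quarter": {"table": "dim_date",   "column": "fiscal_quarter"},
  }

  def build_prompt(question: str) -> str:
      """Combine the user question with the governed metric definitions."""
      model_desc = "\n".join(
          f"- {name}: {meta['table']}.{meta['column']}"
          for name, meta in SEMANTIC_MODEL.items()
      )
      return (
          "Generate a single SQL query for the question below, "
          "using only these governed fields:\n"
          f"{model_desc}\n\nQuestion: {question}\nSQL:"
      )

  def answer(question: str, call_llm) -> str:
      """call_llm is any function that sends a prompt to an LLM and returns text."""
      sql = call_llm(build_prompt(question))
      # In a real setup the query would run against the lakehouse with
      # row-level security applied, and the result would be summarised
      # back into plain language for the user.
      return sql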

The results

For business users, conversational BI feels like having an analyst on standby. Questions that used to take days now take minutes. The data becomes more accessible, decisions become faster, and insights are easier to trust because they’re delivered in plain language.

For data teams, the shift is equally powerful. Instead of acting as dashboard factories, they focus on governance, modelling and quality, the building blocks of a trusted data environment. The result: fewer ad-hoc requests, less firefighting, and a more scalable approach to analytics.

At organisation level, decision-making becomes more democratic and consistent. When more people can safely ask better questions and actually understand the answers, data intelligence evolves from a reporting system into a strategic conversation partner.

The future of BI isn’t just visual. It’s conversational.

Ready to explore Conversational BI?

LACO helps you design and implement AI-powered conversational BI, combining LLM capabilities with governed data so your teams can ask questions in plain language and get trustworthy answers when they need them.

Marketing mix modelling

Marketing leaders today face a growing challenge: deliver measurable results in a landscape that’s more complex, fragmented and regulated than ever. Budgets are under pressure, while customer journeys span more channels and more blind spots than before.

With traditional tracking becoming less reliable, it’s harder to understand what’s truly driving performance. That’s where marketing mix modelling comes in. By connecting the dots between campaigns, spend and business outcomes, it brings clarity back to decision-making and replaces assumptions with insight.

The challenge

Today’s marketing landscape is a paradox: more channels, more data, yet less visibility. Tracking customer behaviour has become harder thanks to GDPR, strict consent rules and the looming end of cookies. At the same time, marketing spend is spread across a growing mix of online and offline touchpoints, making it difficult to see what’s really working.

Traditional tracking methods fall short. ROI and performance are often judged by what’s easiest to measure — last-click metrics, web analytics or internal assumptions — rather than by a complete, objective view. The result: fragmented insight, unclear attribution, and growing pressure on marketing leaders to justify budgets without solid evidence.

The solution

LACO helps organisations cut through this complexity with an AI-powered marketing mix modelling (MMM) approach, built on a robust Microsoft Azure and Microsoft Fabric foundation. Rather than relying on user-level tracking, MMM uses advanced statistical and machine learning techniques to connect consolidated marketing inputs with business outcomes like sales, leads or conversions.

  • It starts with the data. We build a secure, scalable platform on Azure to integrate all relevant marketing inputs — media spend, CRM, web and app analytics, and external data like seasonality or macro-economic indicators. The result: a centralised, governed environment that’s consistent, traceable and ready for modelling.
  • From there, LACO defines a unified marketing data model that brings together online, offline and contextual factors. Search, social, display, TV, radio and events are all harmonised into one view, enabling consistent comparisons and meaningful insights.
  • We then apply AI-powered MMM models that estimate how each driver — from media spend to timing to external influences — contributes to business results. These models are deployed directly into the Microsoft Azure or Microsoft Fabric environment and made accessible via tools like Power BI, so marketing and business teams can explore insights independently. Clear visuals and practical explanations show how spend, saturation and timing affect performance.
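
As an illustration of the modelling idea (not LACO’s production implementation), the sketch below fits a simple MMM with carry-over (adstock) and saturation effects; the channel names, decay rates and toy data are assumptions:

  # Minimal MMM sketch: adstock (carry-over) and saturation transforms feeding a
  # linear model. Column names, decay and saturation values are illustrative.
  import numpy as np
  from sklearn.linear_model import LinearRegression

  def adstock(spend, decay=0.5):
      """Carry part of each week's spend effect over into following weeks."""
      out = np.zeros_like(spend, dtype=float)
      carry = 0.0
      for t, s in enumerate(spend):
          carry = s + decay * carry
          out[t] = carry
      return out

  def saturate(x, half_point):
      """Diminishing returns: response flattens as spend grows."""
      return x / (x + half_point)

  # Weekly spend per channel and observed sales (toy data).
  rng = np.random.default_rng(0)
  weeks = 104
  tv, search, social = (rng.gamma(2.0, 50.0, weeks) for _ in range(3))
  features = np.column_stack([
      saturate(adstock(tv, 0.6), 100.0),
      saturate(adstock(search, 0.2), 60.0),
      saturate(adstock(social, 0.3), 40.0),
  ])
  sales = 200 + features @ np.array([300.0, 150.0, 80.0]) + rng.normal(0, 20, weeks)

  mmm = LinearRegression().fit(features, sales)
  print("Estimated channel contributions:", mmm.coef_.round(1))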

A key feature?
Scenario planning. Decision-makers can run what-if analyses through a conversational interface that acts as an AI agent for the marketing organisation, testing budget shifts, channel reallocations or campaign timing. What happens if we boost spend on social and cut TV? Launch a promo earlier? Shift regional targeting? These simulations support smarter, evidence-based decisions before money is spent.
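
Continuing the sketch above, a what-if reallocation can be simulated by re-running the same transforms on an adjusted spend plan; the 20% shift from TV to social is purely illustrative:

  # What-if sketch (reuses adstock, saturate, mmm and features from the previous
  # example): shift 20% of TV budget to social and compare predicted sales.
  tv_new, social_new = tv * 0.8, social + 0.2 * tv
  scenario = np.column_stack([
      saturate(adstock(tv_new, 0.6), 100.0),
      saturate(adstock(search, 0.2), 60.0),
      saturate(adstock(social_new, 0.3), 40.0),
  ])
  uplift = mmm.predict(scenario).sum() - mmm.predict(features).sum()
  print(f"Predicted sales impact of the reallocation: {uplift:+.0f}")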

Crucially, the solution is transparent and privacy-friendly. Instead of black-box algorithms built on personal data, LACO’s MMM approach relies on aggregated, governed inputs. Assumptions, sources and model logic are documented, building trust and ensuring compliance, even as regulations evolve.

Finally, MMM models feed into a predictive ROI engine that forecasts the impact of future marketing investments. Budget planning becomes proactive, grounded in data rather than gut feeling. Finance and marketing teams get forward-looking guidance on where to invest, at what level, and with what expected return.

The results

With AI-driven MMM on a strong Microsoft Azure and Microsoft Fabric data platform, organisations move from reactive reporting to confident, evidence-based marketing decisions. ROI attribution becomes transparent across all channels, enabling smarter budget allocation and better performance, often without increasing spend.

Forecasting improves, planning becomes more strategic, and marketing finally earns its place as a measurable growth driver. Because the entire approach is built on governed, aggregated data, organisations stay agile and compliant even as privacy rules continue to change. The end result? Clarity. Control. And a marketing function that moves from guessing to knowing.

Ready to bring clarity to your marketing mix?

LACO helps you build an AI-powered marketing mix modelling framework on Microsoft Azure and Microsoft Fabric, turning fragmented marketing data into transparent ROI insights and forward-looking scenarios for smarter budget decisions.

AI readiness: are you solving the right problems?

Everyone’s talking about AI. From the boardroom to the breakroom, it’s being pitched as the next big thing. But between the buzzwords and bold promises, one question often gets lost: are we solving real business problems, or just playing with shiny tools?

For many organisations, the excitement around AI has led to rushed pilots, unclear goals and underwhelming results. Without a strong link to decisions, data and adoption, even the most promising AI use cases fall flat. It’s time to get practical.

The challenge

AI has officially made it to the boardroom. It’s no longer just a pet project for innovation labs or data scientists with too much time on their hands. But as the hype grows, so does the risk of missing the point. Too many organisations still approach AI as a technology exercise, not a business change.

Ambitious projects take off — generative AI, copilots, predictive models — but without a clear link to decisions, processes or outcomes. Budgets get eaten, time disappears, and valuable expert capacity is spent on pilots that never leave the lab.

The most common pitfalls? Unclear business relevance, poor data quality, and solutions that simply don’t fit how people actually work. Somewhere along the way, teams discover that the required data doesn’t exist or isn’t reliable, that the Azure or Fabric setup can’t support what’s needed, or that users don’t trust — let alone understand — the AI output.

The result? A trail of disconnected pilots, sceptical stakeholders and the growing sense that “AI is expensive and doesn’t deliver”.

The solution

Before you build, pause. LACO uses a practical, structured framework built around three deceptively simple questions:

  1. Does it truly matter for your business?
    Start with problems worth solving. Together with business and IT stakeholders, we identify the decisions and processes where AI can actually make a difference. Is the goal to reduce manual work, improve forecasting, detect risks earlier or personalise customer interactions? By defining what success looks like – fewer errors, shorter lead times, higher conversion, lower cost – we make sure the initiative is anchored in business priorities, not technology curiosity.
  2. Do you have the data and platform to make it work?
    Next, we assess the current data landscape and platform readiness. Are the required data sources available, reliable and governed? Can they be integrated into your Azure or Fabric environment? Are performance, security and cost manageable? This step covers everything from data models and pipelines to monitoring and lifecycle management. The goal: ensure every use case is grounded in a solid, scalable foundation.
  3. Will people actually use it?
    AI that nobody uses is just an expensive demo. That’s why we consider adoption from day one. Who will use the solution? How will it impact their daily work? What’s needed in terms of transparency, controls and training? We think through user journeys, interfaces (Power BI, Fabric, apps…) and guardrails, so AI becomes part of the process and not something bolted on as an afterthought.

These three questions address business value (viability), data and platform readiness (feasibility) and user adoption (desirability). LACO applies this framework through focused workshops and assessments, always grounded in your existing Microsoft Azure and Microsoft Fabric setup.
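
As a rough illustration, the three questions can even be expressed as a simple scoring rubric for shortlisting candidate use cases; the use cases, scores and cut-off below are made-up placeholders, not LACO’s assessment method:

  # Illustrative scoring sketch for the three readiness questions.
  # Use cases, scores (1-5) and the cut-off are placeholder values.
  from dataclasses import dataclass

  @dataclass
  class UseCase:
      name: str
      viability: int     # does it truly matter for the business?
      feasibility: int   # do data and platform support it?
      desirability: int  # will people actually use it?

      def score(self) -> float:
          return (self.viability + self.feasibility + self.desirability) / 3

  candidates = [
      UseCase("Churn early-warning", 5, 4, 4),
      UseCase("Generative meeting notes", 3, 5, 2),
      UseCase("Demand forecasting", 4, 2, 5),
  ]
  shortlist = [u.name for u in sorted(candidates, key=lambda u: u.score(), reverse=True)
               if u.score() >= 3.5]
  print(shortlist)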

The outcome? A shortlist of AI use cases that are technically achievable, strategically relevant and supported by the data and governance to actually succeed.

The results

Organisations that apply this framework move beyond experimentation. Instead of spreading resources across disconnected pilots, they focus on a small number of use cases with real business impact. Each initiative is backed by the right data, a scalable platform and clear outcomes — turning AI from a theoretical exercise into a strategic tool.

By building on existing Microsoft Azure and Microsoft Fabric components, time to value is shortened and adoption becomes easier. Business users are involved from the start, ensuring trust, usability and relevance. The result? A portfolio of AI solutions that deliver measurable value and actually support day-to-day decisions.

Ready to assess your AI readiness?

LACO helps you separate hype from real opportunity by mapping your business priorities, data readiness and Azure / Microsoft Fabric platform capabilities. Together, we identify where AI can create tangible value today — and where it should wait.

Which AI model fits your business need?

Artificial intelligence (AI) is everywhere. AI algorithms power fraud detection systems, automate customer conversations, analyse medical scans, generate content, and support decisions at every level of an organisation. Despite this growing presence, the term “AI” is often used without much precision. It serves as a label for a wide range of technologies that operate in fundamentally different ways.

The starting point is not the technology itself. It is the business challenge. Once the problem is clearly defined, along with its context and constraints, it becomes possible to assess which type of AI offers the best solution. A well-defined business challenge makes it far easier to choose the right AI tooling.

This article provides a structured overview of today’s AI landscape. It explains the major types of AI solutions in practical terms, showing what they are, how they work, and where they fit. The goal is not to hype any particular method, but to help organisations make informed choices. Finally, you will find a practical framework for selecting the right approach based on your business’s specific needs.

Different AI models explained

1. Rule-based systems: when logic is enough

Some problems do not require learning or prediction. They simply require a logical structure. Rule-based systems are the most straightforward form of artificial intelligence. They operate through a predefined set of logical instructions: if a condition is met, one or more specific actions follow. The system is not required to adapt itself or improve over time. It behaves consistently, based entirely on the rules written by humans.

These systems are ideal when business processes are stable and clearly defined. They are often used in eligibility checks, validation tools, compliance workflows or basic automation. Since there is no training involved, they do not require data science expertise or historical data.

In contrast to machine learning or Generative AI, rule-based systems do not rely on pattern recognition or large datasets. They are easy to audit and explain, which makes them useful in sectors where traceability and consistency are paramount. However, their biggest strength is also their biggest limitation. They cannot handle ambiguity, complexity or variation well. When things change or when data becomes unpredictable, rule-based systems break down quickly.
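
A minimal sketch of what such a system looks like in practice; the eligibility rules and thresholds below are illustrative:

  # Rule-based sketch: a loan eligibility check expressed as explicit conditions.
  # Thresholds are illustrative; every decision can be traced back to a rule.
  def eligible_for_fast_track(application: dict) -> bool:
      rules = [
          application["amount"] <= 25_000,
          application["customer_years"] >= 2,
          application["missed_payments"] == 0,
      ]
      return all(rules)

  print(eligible_for_fast_track(
      {"amount": 12_000, "customer_years": 5, "missed_payments": 0}))  # True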

2. Traditional machine learning: recognising patterns in data

When the decision logic is too complex to write manually but patterns exist in historical data, machine learning becomes a more suitable option. In a typical machine learning workflow, a business-appropriate model is trained on a dataset that includes both input variables and known outcomes. The system learns to associate certain input combinations with specific results and then applies that understanding to make predictions on new data it has not seen before.

This approach is widely used in applications such as churn prediction, demand forecasting, customer segmentation or fraud detection. The models work best when the data is structured and clean, meaning it is presented in well-organised tables with consistent fields. The algorithms rely on mathematical relationships and statistical reasoning rather than human-defined rules.

Traditional machine learning, sometimes referred to as classical AI, differs fundamentally from newer forms like Generative AI. It does not create content or simulate conversation but instead focuses on extracting predictive value from structured input data. Unlike deep learning (see next section), traditional machine learning typically produces models that are more interpretable. Decision trees, linear regressions and support vector machines, for instance, offer insight into how the model reaches its conclusions. One can follow the path of decisions to see how each input affects the output. This balance between accuracy and explainability makes traditional machine learning a reliable option for many business cases, especially where regulation or stakeholder trust requires transparency.
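
As a small, hedged example of this interpretability, the sketch below trains a decision tree on toy churn data and prints the learned rules; the feature names and labels are invented for illustration:

  # Traditional ML sketch: an interpretable churn classifier on structured data.
  # Feature names and the toy data are illustrative.
  import numpy as np
  from sklearn.tree import DecisionTreeClassifier, export_text

  rng = np.random.default_rng(1)
  X = np.column_stack([
      rng.integers(1, 60, 500),    # months as customer
      rng.integers(0, 10, 500),    # support tickets last year
      rng.uniform(10, 120, 500),   # monthly spend
  ])
  y = ((X[:, 1] > 5) & (X[:, 0] < 12)).astype(int)  # toy churn label

  model = DecisionTreeClassifier(max_depth=3).fit(X, y)
  # The learned rules remain human-readable:
  print(export_text(model, feature_names=["tenure", "tickets", "spend"]))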

3. Deep learning: mastering complexity

Some tasks are simply too complex for traditional models. When dealing with unstructured data such as images, video, sound or natural language, traditional models hit their limits. In such scenarios deep learning becomes essential. Deep learning relies on neural networks with many interconnected layers. Each layer extracts increasingly abstract features from the data. These deep neural networks are capable of recognizing nonlinear patterns that humans would struggle to describe explicitly.

Deep learning has enabled major advances in areas such as speech recognition, medical image analysis, real-time translation and autonomous vehicles. It is also the foundation for many Generative AI systems, including large language models and image synthesis tools. Unlike traditional machine learning, deep learning does not require manual feature selection, as the model discovers the relevant patterns on its own during training. However, this also makes deep learning models harder to interpret. They behave like black boxes, producing accurate results without offering much insight into the chain of reasoning behind them.

Training these models demands large volumes of labelled data and significant computational power. The development process is more complex and more resource-intensive than traditional methods. Yet the performance gains can be substantial when the right data and infrastructure are available.
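
For illustration only, the sketch below trains a small multi-layer network on a non-linear toy problem; real deep learning work relies on dedicated frameworks (PyTorch, TensorFlow), GPUs and far larger datasets:

  # Deep learning sketch: a small multi-layer network learning a non-linear pattern.
  # This only illustrates the layered idea, not a production-scale setup.
  import numpy as np
  from sklearn.neural_network import MLPClassifier

  rng = np.random.default_rng(2)
  X = rng.uniform(-1, 1, (2000, 2))
  y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(int)  # circular boundary, not linearly separable

  net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
  net.fit(X, y)
  print(f"Training accuracy: {net.score(X, y):.2f}")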

4. Foundation models: general intelligence at scale

Foundation models are large-scale deep learning systems trained on vast and diverse datasets, both structured and unstructured, across many domains. Unlike task-specific models, foundation models are designed to perform a broad range of functions. They are not built for a single goal such as predicting sales or segmenting customers. Instead, they are built to understand language, generate content, answer questions and complete tasks in a variety of contexts and processes.

These models include well-known large language models (LLMs) like GPT or PaLM, and are often grouped under the label of Generative AI. They are trained once and then reused across many use cases. This makes them extremely efficient for organizations that want to explore different types of AI capabilities without building custom models from scratch. The general knowledge encoded in a foundation model allows it to understand prompts, summarize documents, generate responses and even reason over open-ended questions.

However, foundation models are not perfect. Because they are trained on publicly available data, they can reflect biases in certain contexts, produce inaccurate statements or fail to grasp the nuances of specialized domains. They are broad in scope but shallow in domain expertise. Their outputs are based on the statistical likelihood of one word following another, not on verified facts. For critical business tasks, that distinction is crucial.
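
The reuse pattern is easy to picture: one general model serving several tasks, steered only by prompts. In the sketch below, call_llm is a placeholder for whatever LLM endpoint an organisation uses, not a specific vendor API:

  # Foundation model sketch: one general model, many tasks, steered only by prompts.
  # call_llm is a placeholder for the organisation's LLM endpoint.
  def summarise(text: str, call_llm) -> str:
      return call_llm(f"Summarise in three bullet points:\n\n{text}")

  def classify_sentiment(text: str, call_llm) -> str:
      return call_llm(f"Label the sentiment as positive, neutral or negative:\n\n{text}")

  def draft_reply(complaint: str, call_llm) -> str:
      return call_llm(f"Draft a short, polite reply to this customer complaint:\n\n{complaint}")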

5. Fine-tuning foundation models: adapting general AI to specific needs

To make foundation models more useful in specific environments, organizations can fine-tune them. Fine-tuning involves retraining a general model on smaller, more targeted datasets that reflect a particular industry, tone or process. This enables the model to use its broad general knowledge more effectively within a defined business context. In machine learning terms, this process is known as domain adaptation.

Foundation models serve as base models, which can then be adapted to support specialized tasks without having to rebuild everything from scratch. For example, a healthcare provider might fine-tune a model using anonymised patient records to improve its ability to summarize medical reports. A legal team might fine-tune on legal terminology to reduce ambiguity during contract review. A retailer could adjust tone and vocabulary to match brand guidelines in their automated communications.

Fine-tuning offers a practical middle ground. It avoids the cost and complexity of building a model from zero, while still offering more precision and relevance than an off-the-shelf tool. However, the process requires expertise in prompt design, data preparation and human evaluation. In some cases, the customised version might make more mistakes or lose the broad knowledge that made the original model useful. So even though fine-tuning is meant to improve performance, doing it poorly can lead to worse outcomes, and the general version of the AI might actually work better.
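
As a hedged sketch of the preparation work involved, fine-tuning data is typically packaged as prompt/response pairs; the JSONL layout below is a common convention, but the exact schema depends on the platform used:

  # Fine-tuning sketch: packaging domain examples as prompt/response pairs.
  # The JSONL layout is a common convention; the exact schema is platform-specific.
  import json

  examples = [
      {"prompt": "Summarise this discharge letter:\n<letter text>",
       "response": "<concise clinical summary written by a domain expert>"},
      {"prompt": "Explain this contract clause in plain language:\n<clause text>",
       "response": "<plain-language explanation approved by legal>"},
  ]

  with open("finetune_examples.jsonl", "w", encoding="utf-8") as f:
      for ex in examples:
          f.write(json.dumps(ex, ensure_ascii=False) + "\n")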

6. Custom models: building from the ground up

Some organizations have access to high-quality proprietary data that gives them a competitive advantage. In those cases, it may be more useful to build a custom model from scratch — a route often referred to as developing a proprietary AI model. This involves collecting relevant data, defining a clear objective, selecting appropriate algorithms, training the model and integrating it into production systems.

Custom models provide full control over inputs, outputs and behaviour. They are especially useful in cases where off-the-shelf tools or general-purpose models fail to capture business-specific nuances. They also offer greater flexibility in adapting to changing requirements or integrating with existing systems and workflows of the organization.

However, custom development is not a light decision. It demands significant time, talent and financial investment. Success depends on both data maturity and operational readiness. While building proprietary AI can offer long-term differentiation, it only makes sense when the business case clearly outweighs the cost of adapting an existing solution. When done well, it results in models that are deeply aligned with organizational goals and difficult for competitors to replicate in the short term.

Explainable AI: clarity as a design principle

In some domains, being right is not enough. Organizations must also be able to explain how and why an AI system reached its conclusion. This is where explainable AI, often referred to as XAI, comes in. It refers to the use of models and techniques that prioritise interpretability alongside performance.

This can be achieved by choosing inherently simple models such as decision trees, or by applying post hoc explanation tools that provide insights into more complex models. In both cases, the goal is to provide decision-makers with understandable justifications that go beyond model outputs.
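
A small illustration of the post hoc route: permutation importance shuffles one feature at a time to show how much a fitted model relies on it. The data and feature names below are toy values:

  # Post hoc explanation sketch: permutation importance on a fitted model.
  # Shuffling one feature at a time shows how much the model relies on it.
  import numpy as np
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.inspection import permutation_importance

  rng = np.random.default_rng(3)
  X = rng.normal(size=(1000, 3))
  y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)  # feature 0 matters most

  model = RandomForestClassifier(random_state=0).fit(X, y)
  result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
  for name, imp in zip(["income", "tenure", "age"], result.importances_mean):
      print(f"{name}: {imp:.3f}")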

Explainability is essential in various sectors, particularly those that are regulated or high stakes such as healthcare, finance, insurance and government. It also plays a key role in building trust among users and stakeholders. In many situations, an understandable model that performs slightly less well is preferred over a high-performing black box.

The AI Act: risk-based assessments

On top of all this, organisations need to comply with the AI Act, the European Union’s first major legal framework designed to govern the use of artificial intelligence, which is being enforced in phases starting in August 2024.

It takes a risk-based approach, meaning that the level of oversight depends on how an AI system is used and the potential impact it may have. Companies will face strict requirements related to transparency, accuracy, and human oversight. Systems deemed unacceptable, like those that manipulate behaviour or use social scoring, will be banned entirely. For companies developing or deploying AI, the Act brings new responsibilities, including documentation, testing, and ongoing monitoring. For users and the public, it offers greater protection, aiming to ensure that AI serves people in a fair and trustworthy way.

Which AI model is the right one for my use case?

By now, the key takeaway should be clear. Artificial intelligence is not a single system, and choosing the right approach depends entirely on what you are trying to achieve. Every AI model comes with trade-offs. Simpler AI models such as rule-based systems or traditional AI techniques offer clarity and control. More complex solutions like deep learning or Generative AI provide power and flexibility, but often at the cost of transparency. General-purpose foundation models are quick to deploy and highly versatile, while custom models offer a near perfect fit with business-specific requirements but require significant investment for development.

To help navigate these choices, LACO developed a practical decision tree that maps the reasoning behind each AI option. It connects your use case characteristics with the most suitable technology, whether that is a fine-tuned large language model, a classic supervised learning algorithm, or a rule-based automation. The tool is designed to support teams in making grounded, informed decisions based on data availability, business goals and regulatory expectations.
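
The downloadable asset contains the full tree; purely as an illustration of the style of reasoning, a few of the typical questions could be expressed in code like this (simplified, and not LACO’s actual decision tree):

  # Purely illustrative decision-flow sketch, not LACO's actual decision tree:
  # a few of the questions that typically steer the model choice.
  def suggest_approach(stable_rules: bool, needs_generation: bool,
                       unstructured_data: bool, explainability_critical: bool) -> str:
      if stable_rules:
          return "Rule-based system"
      if needs_generation:
          return "Foundation model (possibly fine-tuned)"
      if unstructured_data:
          return "Deep learning"
      if explainability_critical:
          return "Traditional, interpretable machine learning"
      return "Traditional machine learning or a custom model"

  print(suggest_approach(stable_rules=False, needs_generation=False,
                         unstructured_data=False, explainability_critical=True))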

You can download the full decision tree here and use it as part of your internal planning or evaluation process.

5 key takeaways from SAS Innovate that could reshape your AI strategy

SAS Innovate is a showcase of how SAS tools can be used to build AI-driven applications that reshape how businesses harness data and AI.

It’s also a chance to gain insights and knowledge that simplify and enhance data and AI ecosystems, save businesses time and make it easier to make the right decisions.
From synthetic data to hands-on experimentation, here’s what stood out, and what it means for your organisation.

Takeaway 1:
Synthetic data in AI has potential, but its value remains to be proved

Synthetic, AI-generated data allows for analysis without using real information, keeping data blinded and secure. It’s used for a wide range of purposes, like:

  • fraud detection
  • cybersecurity monitoring
  • policy development
  • law enforcement
  • economic analysis
  • clinical trials
  • worker safety
  • quality control

It preserves the statistical properties and patterns of the original data and can increase the volume of minority data, enabling models to be better trained to recognise this group.
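
As a rough illustration of the principle (and not of SAS Data Maker itself), synthetic rows can be drawn from a distribution fitted to the real data so that means and correlations are approximately preserved; real tools handle mixed data types, constraints and privacy guarantees far more carefully:

  # Synthetic data sketch (not SAS Data Maker): sample new rows from a
  # multivariate normal fitted to the real data, so means and correlations
  # are approximately preserved.
  import numpy as np

  rng = np.random.default_rng(4)
  real = rng.multivariate_normal([50, 3.2], [[25, 4], [4, 1]], size=500)  # toy "real" data

  mean, cov = real.mean(axis=0), np.cov(real, rowvar=False)
  synthetic = rng.multivariate_normal(mean, cov, size=5000)  # more rows, same structure

  print("Real corr:     ", np.corrcoef(real, rowvar=False)[0, 1].round(2))
  print("Synthetic corr:", np.corrcoef(synthetic, rowvar=False)[0, 1].round(2))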

Augment & generate data

This is what the SAS Data Maker does. It unlocks the potential of existing data using a low-code/no-code interface to quickly augment or generate data.

While it can be very useful for specific sectors with strict data privacy requirements, its practical value will really depend on pricing and packaging, especially when compared to more DIY solutions using custom scripts and blinded datasets. Our verdict: watch this space.

Takeaway 2:
We’re quickly moving towards smarter automation, safer workplaces & sensitive chatbots

Several practical AI innovations are happening within SAS. A highlight from the showcase was the introduction of a SAS custom step in Viya, an easy-to-use, modular feature accessible to both SAS and non-SAS users, with use cases like auto-documenting code and supported by a public GitHub repository.

Agentic AI addresses hallucinations in AI-generated responses by using tools like SAS Model Manager to track prompts and available language models.

Guardrails for chatbot interactions use sentiment analysis to handle sensitive scenarios like missed payments or emergencies, and can automatically derive actionable steps from conversations.

Lastly, event stream processing is used to detect and prevent workplace incidents by monitoring CCTV footage for safety compliance (e.g. missing PPE) and alerting relevant teams in real-time.

Takeaway 3:
SAS Viya Workbench is a big step forward

The SAS Viya Workbench is a coding interface for developers that appears to be a solid move by SAS with lots of potential. What are the benefits?

  • It allows users to work with SAS, Python or R in a notebook way of working.
  • It focuses on self-service, enabling developers to allocate the resources they need and choose their preferred IDE.
  • It’s a standalone tool that doesn’t require additional SAS Viya or SAS 9 licences.
  • Improvements coming in Q2 and Q3 include customisable environments and batch scheduling.

Stepping up

The Viya Workbench feels like a promising step forward. With strong examples already set by Databricks and Microsoft’s Fabric and Synapse, it’s encouraging to see SAS stepping up with similar products. If development continues, Viya Workbench has the potential to become a widely adopted multilanguage platform — something many enterprises will find appealing.

Want to know more? SAS has created a webinar with additional information here.

Takeaway 4:
Clear communication remains essential

The evolution of the data and analytics ecosystem into an integrated, cohesive (cloud) platform offers exciting prospects for organisations seeking to unlock the full potential of their data assets. The figures below show that if organisations can learn to navigate the evolving landscape, they can harness the transformative power of integrated cloud data platforms.

  • By 2024, enterprises that primarily build applications leveraging a D&A ecosystem from a single cloud service provider will outperform competitors, despite vendor lock-in.
  • By 2024, 50% of new system deployments in the cloud will be based on a cohesive cloud data ecosystem rather than on manually integrated point solutions.
  • Through 2025, powerhouse cloud ecosystems will consolidate the vendor landscape by 30% leaving customers with fewer choices and less control of their software destiny.

Takeaway 5:
SAS 9.4 M9 released, a major leap in security and integration

The latest maintenance release of SAS 9.4, M9, was launched on 17 June 2025. It delivers robust security enhancements, Azure integration improvements and a clear roadmap for the future beyond M9.

  • Security first: M9 introduces automated TLS setup, drastically reducing manual configuration and errors. Centralised certificate management improves governance and consistency. The release also brings Multi-Factor Authentication (MFA) with Single Sign-On (SSO) integration, and commits to quarterly security updates.
  • Azure & Viya integration: M9 strengthens interoperability with SAS Viya and Microsoft Azure, making hybrid deployments smoother and more scalable.
  • Extended support: SAS has confirmed standard support for 9.4 M9 through 2030, giving organisations long-term stability.
  • Future-proofing: Even more importantly, SAS has confirmed that another maintenance release (M10) is on the roadmap, signalling continued investment in the 9.4 platform.

Why these insights matter

SAS Innovate made one thing clear: the data and AI landscape is evolving fast, and organisations need to keep pace. From synthetic data and agentic AI to hands-on tools like Viya Workbench, the innovations on display aren’t future concepts, they’re practical enablers that can drive smarter, safer, and more efficient decisions today.

For businesses aiming to future-proof their data strategy, SAS is positioning itself as a powerful ecosystem to build on. And for data & AI teams like ours at LACO, the event reaffirmed the importance of staying curious, experimenting with new tools, and continuously refining your approach in an ever-changing environment.

Hybrid data architecture | Connect SAS, Microsoft and Databricks with LACO

Large organisations rarely rely on a single data platform. SAS, Microsoft and Databricks each bring unique strengths, but when they operate separately they create silos, duplicated work and inconsistent reporting. A hybrid data architecture brings these platforms together in one governed and connected ecosystem.

LACO helps organisations design such architectures with a pragmatic and structured approach that aligns people, processes and platforms around one shared data strategy, without paying a heavy integration tax to bring it all together.

The challenge

SAS, Microsoft and Databricks each offer value, but without a clear architecture they operate in isolation. Teams move data manually, ETL processes are repeated on different platforms and reports no longer match. Authentication rules differ, lineage is inconsistent and on-premises tools are difficult to integrate with cloud environments. This creates delays, frustration and rising cost without delivering real progress. Organisations do not want to replace tools that work. They want an architecture that connects them.

The solution

The first step is understanding the full landscape. LACO performs a complete assessment of tools, processes and dependencies to reveal how data moves today. We then design a modular target architecture that assigns clear roles to each platform. Integration is achieved through secure APIs, automated pipelines and consistent naming and governance standards.

We establish a shared governance layer that covers access rights, metadata, lineage and documentation so all platforms behave as one ecosystem. Change management ensures that IT, business users and data owners understand and trust the new setup.
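
As a small illustration of what that shared layer can formalise, platform roles and naming conventions can be codified and checked automatically; the roles and the pattern below are placeholder choices, not a prescribed standard:

  # Illustrative sketch: codifying platform roles and a naming convention so the
  # shared governance layer can be checked automatically. Roles and the pattern
  # are placeholder choices.
  import re

  PLATFORM_ROLES = {
      "SAS":        "statistical modelling and regulated reporting",
      "Databricks": "large-scale data engineering and ML",
      "Microsoft":  "Fabric lakehouse, Power BI semantic models and distribution",
  }

  DATASET_PATTERN = re.compile(r"^(raw|curated|mart)_[a-z0-9]+(_[a-z0-9]+)*$")

  def check_dataset_name(name: str) -> bool:
      """Return True when a dataset name follows the agreed convention."""
      return bool(DATASET_PATTERN.match(name))

  print(check_dataset_name("curated_customer_orders"))  # True
  print(check_dataset_name("Final_Report_v2"))          # False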

The results

The organisation gains a connected, governed and scalable platform where everything works together rather than in parallel. The architecture eliminates duplicated work and ensures that data is traceable and compliant across all environments. Teams collaborate more effectively and speak the same data language.

Performance improves through automation, cost overlap decreases and the ecosystem becomes ready for AI and modern analytics. Instead of replacing existing investments, the organisation extends their value with clarity and control.

Ready for the next step?
