What’s New in Microsoft Intune – November 2023

Microsoft Intune updates on a weekly basis with new features and enhancements. As an MSP working with enterprise customers, it’s important to stay up to date on these changes so you can continue providing excellent service and support. In this comprehensive blog post, I’ll summarize the key Intune developments over the past month and explain how they can benefit your customers.

Week of October 30, 2023

Device Security

  • Strict Tunnel Mode is now available for Microsoft Tunnel for MAM on Android and iOS/iPadOS devices. This allows admins to configure Microsoft Edge so internet traffic is blocked if the VPN disconnects. To enable it, create an Edge app configuration policy and set “StrictTunnelMode” to True (a brief sketch follows).
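For context, the policy boils down to a single key/value pair delivered through a managed-apps app configuration policy targeting Microsoft Edge. The sketch below shows the shape of that setting as a Python dict; the fully qualified key name is my assumption based on Edge’s managed-browser configuration naming, so verify the exact key against Microsoft’s documentation before using it.

```python
# Illustrative only: the shape of the Edge app configuration setting that
# enables Strict Tunnel Mode for Microsoft Tunnel for MAM.
# The fully qualified key name below is an assumption; confirm it in the
# official Microsoft Edge / Intune documentation before deploying.
edge_app_config_settings = {
    "com.microsoft.intune.mam.managedbrowser.StrictTunnelMode": "true",
}
```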

Week of October 23, 2023 (Service release 2310)

App Management

  • Android Company Portal users on versions below 5.0.5333.0 will be prompted to update to avoid enrollment issues with the Authenticator app.
  • The iOS Conditional Launch setting now includes a “warn” action for the Min SDK version requirement. This warns users if the requirement isn’t met.
  • Minimum OS for Apple LOB and store apps can now be set to the latest releases – iOS/iPadOS 17.0 and macOS 14.0.

Device Configuration

  • OEMConfig profiles over 350 KB that don’t deploy successfully no longer show as “pending” in the Intune portal.
  • Android Enterprise now supports installing/uninstalling mandatory LOB apps on AOSP devices.
  • Pre-install and post-install scripts can now be configured for unmanaged macOS PKG apps.
  • FSLogix settings are now available in the Settings Catalog, so you can configure them directly without importing the FSLogix ADMX templates.

Device Enrollment

  • Web enrollment with just-in-time (JIT) registration is now generally available for personal iOS/iPadOS devices, reducing the number of authentication prompts users see during enrollment.

Device Management

  • The Intune add-ons page now shows enhanced details on licenses, capabilities, and billing.
  • Remote Help for Android is now generally available for Zebra and Samsung dedicated devices.

Device Security

  • Defender Update controls, a profile for managing Microsoft Defender update settings, is now generally available.
  • Elevation report by Publisher is a new Endpoint Privilege Management report showing elevations by app publisher.
  • Intune Endpoint Security policies for EDR now support macOS and Linux devices managed by Defender for Endpoint.

Week of October 16, 2023

Tenant Administration

  • The endpoint.microsoft.com URL now redirects to intune.microsoft.com.

Week of September 18, 2023 (Service release 2309)

App Management

  • Intune now supports MAM for Microsoft Edge for Business on personal Windows devices using app protection policies (APP) and app configuration policies (ACP).

Device Configuration

  • Zebra devices running Android 11 or later must use Zebra’s new OEMConfig app for OEMConfig profiles; devices on older Android versions continue to use the legacy app.
  • Config Refresh settings are available in the Windows Insider Settings Catalog.
  • New settings added to the Apple Settings Catalog for macOS and iOS/iPadOS.

Device Enrollment

  • Support for single sign-on during enrollment for Android Enterprise corporate-owned and fully managed devices.

Device Management

  • Introducing Remote Help for macOS devices.
  • The management certificate expiration date can now be viewed and filtered in the Devices list.
  • Windows Defender Application Control is being renamed to App Control for Business.
  • Intune now supports iOS/iPadOS 15.x as the minimum version.

Device Security

  • Software updates and passcode policies can now be managed on Apple devices using the Settings Catalog.
  • MVISION Mobile is now named Trellix Mobile Security.

Intune Apps

  • New protected apps added: BuddyBoard and Microsoft Loop.

Monitor and Troubleshoot

  • Policy compliance and Setting compliance reports are now generally available.

Week of September 11, 2023

Device Configuration

  • Remote Launch is now available in Remote Help for Windows, letting admins launch help sessions directly from the Intune portal.

Week of September 4, 2023

Device Management

  • Microsoft Intune is ending support for Android device administrator on GMS devices in August 2024. Customers should switch to another management method before then.

Key Takeaways

The last few months have seen important developments in Intune’s app management, device configuration, enrollment, security, and monitoring capabilities.

Some key highlights include Remote Help for Android and macOS to improve IT support, new endpoint security features like Defender Update controls and Endpoint Privilege Management reporting, and mobile management enhancements like support for the latest OS versions and single sign-on.

As an MSP, staying up to date on Intune’s capabilities allows you to take full advantage of the platform and provide the best solutions to customers. Referring to this summary can help you quickly understand what’s changed recently. Let me know if you need any clarification or have additional questions!

How Microsoft 365 Copilot Uses Large Language Models and Your Data

Microsoft recently demonstrated how Microsoft 365 Copilot works by leveraging large language models that interact with your organizational data. Copilot transforms how we work by providing intelligent suggestions, summaries, and content generation within our everyday workflows. But how exactly does it work while also respecting privacy and security? Let’s break it down.

Where LLMs Get Their Knowledge

  • Large language models (LLMs) are trained on massive public datasets: books, articles, and websites. From this data they learn language, context, and meaning
  • You interact with an LLM using a prompt – a statement or question
  • The LLM generates a response based on its training and the context from your prompt
  • As you chat, the conversation provides more context so the LLM can stay relevant
  • The chat history is wiped after each conversation

Providing Context to an LLM

To illustrate how providing context in a prompt works:

  • Asked Microsoft Bing Chat (powered by GPT) what color shirt I’m wearing without any context
  • It responded that it can’t see me to know
  • Asked again and described my outfit in the prompt
  • It then responded using the context I provided
  • In a new chat, asked again what color shirt I’m wearing
  • It responded the same as the first time, showing that context from a previous chat doesn’t persist (the toy sketch below mimics this behavior)
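To see the underlying idea, here is a toy sketch using a stand-in ask_llm() function (purely hypothetical, not a real API): the model can only answer from whatever text is in the prompt it receives, and nothing carries over between calls.

```python
# A toy stand-in for a chat LLM: it can only answer from whatever text
# appears in the prompt it receives, and it keeps no state between calls.
def ask_llm(prompt: str) -> str:
    if "blue shirt" in prompt.lower():
        return "You're wearing a blue shirt."
    return "I can't see you, so I don't know what you're wearing."

print(ask_llm("What color shirt am I wearing?"))
# -> I can't see you, so I don't know what you're wearing.

context = "For context: I'm wearing a blue shirt and khaki pants."
print(ask_llm(context + " What color shirt am I wearing?"))
# -> You're wearing a blue shirt.

# A brand-new conversation means a fresh prompt with no carried-over context,
# so the model gives the same answer as the very first call.
print(ask_llm("What color shirt am I wearing?"))
# -> I can't see you, so I don't know what you're wearing.
```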

How Microsoft 365 Copilot Works

Copilot has several core components:

  • Large language models hosted in the Microsoft Cloud via Azure OpenAI
  • Powerful orchestration engine
  • Integrated into Microsoft 365 apps
  • Leverages Microsoft Search for information retrieval
  • Uses Microsoft Graph for organizational data and relationships
  • Respects per-user access permissions to content and Graph data

For example, in Teams:

  • User asked: “Did anything happen yesterday with Fabrikam?”
  • The Copilot orchestrator searched the user’s accessible data for relevant context: an email from Mona, project files the user had access to, and sharing notifications for a contract review
  • Copilot combined this context into a prompt for the LLM
  • LLM generated a response summarizing the Fabrikam activities
  • Copilot cited each data source for transparency (a simplified sketch of this flow follows the list)
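Conceptually, this is a retrieval-augmented generation (RAG) loop: retrieve permission-trimmed content, assemble it into a grounded prompt, then let the LLM answer with citations. The sketch below is my simplified illustration of that pattern, not Copilot’s actual code; search_graph() and call_llm() are hypothetical placeholders for Microsoft Graph/Search retrieval and the hosted LLM call.

```python
# Simplified retrieval-augmented generation (RAG) loop illustrating the kind
# of grounding Copilot's orchestrator performs. The helpers are hypothetical
# placeholders, not Microsoft's actual implementation.
from dataclasses import dataclass

@dataclass
class Source:
    title: str
    snippet: str

def search_graph(user: str, query: str) -> list[Source]:
    """Placeholder: return only content this user is permitted to see."""
    return [
        Source("Email from Mona", "Fabrikam asked about revised pricing..."),
        Source("Contract review notification", "Fabrikam MSA shared for review."),
    ]

def call_llm(prompt: str) -> str:
    """Placeholder for the hosted LLM call (e.g. Azure OpenAI)."""
    return ("Yesterday Mona emailed about Fabrikam pricing, and the Fabrikam "
            "MSA was shared for contract review. [1][2]")

def answer(user: str, question: str) -> str:
    sources = search_graph(user, question)                 # grounding step
    context = "\n".join(
        f"[{i + 1}] {s.title}: {s.snippet}" for i, s in enumerate(sources)
    )
    prompt = (f"Answer using only these sources and cite them by number:\n"
              f"{context}\n\nQuestion: {question}")
    return call_llm(prompt)                                # generation step

print(answer("megan@contoso.com", "Did anything happen yesterday with Fabrikam?"))
```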

Generating New Content

Copilot can also use your data to help generate new content, like proposals:

  • LLM is trained on proposal document structure and language
  • Copilot orchestrator retrieves relevant content from documents you select
  • This context is added to the LLM prompt
  • LLM generates a draft proposal leveraging your existing data
  • Generated content is a prompt response – not retained or used to train the LLM
  • All data retrieval respects user permissions

Maintaining Privacy and Security

  • Prompts sent to the LLM along with organizational data provide context but are not retained
  • LLM responses are not used to train the foundation models
  • All data retrieval follows user access permissions
  • Learn more about Microsoft’s responsible AI principles

Hope this breakdown demystifies how Copilot taps into large language models and your data while maintaining security and privacy. Stay tuned for more Copilot updates!

Boost Productivity with Copilot and DBGM

As an IT services firm focused on digital transformation, DBGM Consulting can help you prepare for Microsoft 365 Copilot. Our AI experts can:

  • Audit data sources Copilot will leverage
  • Refine content permissions and governance
  • Validate technical readiness
  • Develop change management plans
  • Provide user training

Contact us to maximize Copilot’s productivity benefits while safeguarding privacy and security. DBGM Consulting has the expertise to implement Microsoft’s latest innovations.

Simplify Endpoint Management with Intune and Windows Autopilot

Managing endpoints is growing ever more complex with remote work and proliferating device form factors. Legacy tools like Configuration Manager (MECM) can’t keep pace with today’s demands for automated, scalable management capabilities.

In this guide, we’ll explore how Microsoft’s modern Intune and Autopilot solutions can radically simplify endpoint management. By replacing fragmented tools with unified cloud services, IT teams can finally eliminate deployment hassles and secure any device.

Streamline Deployments with Autopilot

Windows Autopilot provides a game-changing, cloud-based approach to device deployment. Simply register new devices under your Azure AD tenant and assign desired profiles. When users power on the device, Autopilot fully configures Azure AD join, policies, apps, and settings automatically.

With Autopilot, there’s no need to physically touch or customize each device. Your team defines profiles centrally that get consistently applied to any auto-registered device, even remote units shipped directly to users. You can fully configure devices in minutes instead of hours or days.
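If you prefer to automate the registration step itself, hardware hashes can be imported programmatically. The sketch below is a hedged example using Microsoft Graph from Python: it assumes you already have an access token with the appropriate DeviceManagementServiceConfig permission, a CSV produced by the usual hardware-hash collection script, and that the importedWindowsAutopilotDeviceIdentities endpoint and its field names match the current Graph reference, so double-check those before relying on it.

```python
# Hedged sketch: bulk-import Autopilot hardware hashes via Microsoft Graph.
# Assumptions: a valid access token with DeviceManagementServiceConfig
# permissions, the `requests` package, a CSV exported by the usual
# hardware-hash collection script (column names may differ in your export),
# and the importedWindowsAutopilotDeviceIdentities endpoint as documented
# in the Graph reference. Verify all of these before using.
import csv
import requests

GRAPH = "https://graph.microsoft.com/beta"

def register_devices(csv_path: str, token: str, group_tag: str = "Sales-Laptops") -> None:
    headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            body = {
                "serialNumber": row["Device Serial Number"],
                "hardwareIdentifier": row["Hardware Hash"],  # base64-encoded hash
                "groupTag": group_tag,
            }
            resp = requests.post(
                f"{GRAPH}/deviceManagement/importedWindowsAutopilotDeviceIdentities",
                headers=headers,
                json=body,
            )
            resp.raise_for_status()
            print(f"Submitted {body['serialNumber']} for Autopilot import.")
```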

Centralize Management with Intune

Microsoft Endpoint Manager (MEM), now branded simply as Microsoft Intune, provides unified management capabilities spanning Intune for cloud-connected devices and Configuration Manager for traditional on-prem systems.

Intune delivers robust management for Windows, iOS/iPadOS, Android, and macOS devices, all from a unified cloud console. Key capabilities like conditional access policies, app and update deployments, and device compliance monitoring help secure and control endpoints centrally.

Migrate to Modern Management

For organizations with Configuration Manager already, Microsoft provides multiple options to begin shifting towards Intune’s modern approach. These include moving specific workloads like co-management for shared oversight of Windows 10 devices.

You can also migrate end-to-end to Intune while retaining access to ConfigMgr reports and data. Our consultants can help assess the optimal path forward based on your environment and needs. The future of endpoint management is in the cloud.

Adopt Cloud-First for Windows 11

Windows 11 raises the security bar with hardware requirements like TPM 2.0 and Secure Boot, and readiness and compliance for those requirements are far easier to assess and enforce with cloud-based tooling than with ConfigMgr alone. Capabilities such as Windows Autopilot Reset let you return devices to a business-ready, compliant state without reimaging.

With cloud-first management via Intune and Azure AD, you gain identity-driven security and productivity. Licensing gets simpler too, since Microsoft Intune plans include the rights to use Configuration Manager.

Drive Better Experiences and Security

Unifying endpoint management in the Microsoft cloud enables IT to securely support the new era of work from anywhere, on any device. Intune and Autopilot reduce help desk tickets through self-service and automated deployments.

Users stay productive with seamless access to corporate resources. With robust cloud-based management, your organization can embrace BYOD and hybrid workstyles without compromising security. Don’t let legacy tools hold back your endpoint management capabilities. Engage with our Intune experts at DBGM to begin simplifying today.

MemGPT: Giving Large Language Models Memory Management

Large language models (LLMs) like GPT-3 have shown impressive capabilities in generating human-like text. However, they still face limitations like small context windows and short-term memory loss. To overcome these constraints, researchers at UC Berkeley have developed MemGPT – a virtual memory management system for LLMs. In this post, I’ll explain how MemGPT works and its potential applications.

The Problem: LLMs Have Limited Memories

Current LLMs suffer from restricted context windows. For example, the base GPT-4 model can only process 8,192 tokens at a time, and GPT-3’s window was even smaller at roughly 2,000 tokens. This is like having a tiny short-term memory that resets every few seconds. As a result, LLMs exhibit short-term memory loss. Talking to them feels like chatting with Dory from Finding Nemo – having to constantly remind them of the conversation history and goals.

LLMs also can’t process large documents, as their context windows are smaller than most books. For instance, a typical novel may contain over 200,000 tokens, far more context than current LLMs can handle.
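To get a feel for the gap, you can count a document’s tokens yourself. The short sketch below assumes the open-source tiktoken tokenizer is installed and compares a local text file (here a hypothetical novel.txt) against an 8K-token window.

```python
# Rough token count for a document versus an 8K context window.
# Assumes the open-source `tiktoken` tokenizer package is installed and a
# local file named novel.txt (hypothetical) sits next to the script.
import tiktoken

CONTEXT_WINDOW = 8192
enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by recent GPT models

with open("novel.txt", encoding="utf-8") as f:
    tokens = enc.encode(f.read())

print(f"{len(tokens):,} tokens vs. a {CONTEXT_WINDOW:,}-token window "
      f"({len(tokens) / CONTEXT_WINDOW:.1f}x larger)")
```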

Introducing MemGPT – Memory Management for LLMs

To overcome these problems, researchers at UC Berkeley developed MemGPT. It adds a virtual memory management layer on top of existing LLMs.

MemGPT provides two key capabilities:

Long-term memory – It preserves information beyond a single context window, enabling retrieval of older chat history and document details.

Modular processing – MemGPT can break down tasks into sub-routines and chain multiple functions together, like searching archives and then sending a message.

How MemGPT Works

MemGPT augments LLMs with a hierarchical memory structure and operating system-like functions for memory management. It has three main components:

1. Event-based control flow – Different events like user messages, scheduled tasks, or new documents trigger the LLM to make inferences and take actions.

2. Memory hierarchy – There are two memory types: main context (like RAM) and external context (like disk storage). Information is moved between them using MemGPT’s functions.

3. OS functions – Special functions to search databases, analyze documents modularly, rewrite responses based on user feedback, and more.

This architecture enables memGPT to leverage long-term memory from external context while maximizing the limited main context window for immediate inferences. Chaining functions together allows completing complex tasks.
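To make the hierarchy concrete, here is a minimal sketch of the paging idea. This is my own simplification, not MemGPT’s actual code: a small main context holds recent messages, evicted messages land in an archive, and relevant archived items are paged back into the prompt on demand.

```python
# Minimal sketch of MemGPT-style memory paging (a simplification, not the
# project's actual code): a small main context plus a searchable archive.
import re

class TieredMemory:
    """Toy two-tier memory: a small main context plus an external archive."""

    def __init__(self, main_capacity: int = 3):
        self.main_capacity = main_capacity
        self.main_context: list[str] = []   # analogous to RAM
        self.archive: list[str] = []        # analogous to disk storage

    def add(self, message: str) -> None:
        self.main_context.append(message)
        # Evict the oldest messages once the window is full.
        while len(self.main_context) > self.main_capacity:
            self.archive.append(self.main_context.pop(0))

    def search_archive(self, query: str) -> list[str]:
        # Stand-in for vector or keyword search over external context.
        words = set(re.findall(r"\w+", query.lower()))
        return [m for m in self.archive
                if words & set(re.findall(r"\w+", m.lower()))]

    def build_prompt(self, user_message: str) -> str:
        # Page relevant archived memories back in, then add the recent window.
        recalled = self.search_archive(user_message)
        parts = (["[Recalled from archive]"] + recalled) if recalled else []
        parts += ["[Recent conversation]"] + self.main_context + [user_message]
        return "\n".join(parts)


memory = TieredMemory(main_capacity=2)
for msg in ["My name is Dana.", "I work at Fabrikam.",
            "I like hiking.", "Let's plan a weekend trip."]:
    memory.add(msg)

print(memory.build_prompt("Where do I work?"))  # recalls the Fabrikam line
```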

Testing MemGPT

The researchers tested MemGPT on tasks like:

  • Analyzing lengthy documents (like books) that exceed LLM context limits
  • Having long-form conversations while maintaining chat history

In both cases, MemGPT significantly outperformed the same LLMs used without memory management. It could comprehend large documents as a whole and hold dialogues with real long-term memory.

Potential Applications

By overcoming memory limitations, MemGPT opens up many new possibilities for LLMs, including:

  • Personalized bots – Maintain user memory and preferences during conversations
  • Research/writing assistants – Process papers, books, archives while answering queries
  • Customer service agents – Reference databases, ticket history to resolve issues
  • Smart tutors – Learn student progress and customize curriculum over time

MemGPT shows that with some clever memory management, LLMs can indeed achieve more human-like capabilities. I’m keen to see OpenAI itself integrate such virtual memory architectures into future GPT versions.

While MemGPT itself might be forgotten over time as LLMs evolve, it represents an important step towards more versatile and useful AI systems. Let me know your thoughts on MemGPT and how you envision using it!

Demystifying AI: Your Guide to Understanding Artificial Intelligence

Artificial intelligence (AI) is advancing rapidly, but much confusion still exists about what it can and can’t do. In this comprehensive blog post, I’ll explain the basics of AI – busting myths, outlining real-world applications, and addressing concerns like job losses. Read on for a full lowdown on the current state of this transformative technology!

What Exactly is AI?

The term was coined back in 1955 to mean training computers to imitate human learning and intelligence. The goal is to enable systems to analyze information and draw inferences beyond just following coded instructions.

While true thinking machines don’t exist yet, AI has become adept at pattern recognition using approaches like:

  • Reinforcement learning: Provide feedback on tasks so the system iteratively improves. Used in robotics.
  • Neural networks: Loosely modeled on the brain; analyze huge datasets to make predictions. Used in computer vision.
  • Generative adversarial networks (GANs): Two networks compete – one generates content, another judges quality. Used in creative applications (a toy training sketch follows this list).
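For the technically curious, the adversarial setup behind GANs fits in a few lines. The toy sketch below (assuming PyTorch is installed) trains a generator to mimic a simple one-dimensional distribution while a discriminator tries to tell real samples from generated ones; image-generating GANs apply the same idea at much larger scale.

```python
# Toy GAN sketch (assumes PyTorch): the generator learns to mimic samples
# from a normal distribution centered at 4 while the discriminator learns
# to tell real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))               # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.25 + 4.0      # "real" samples
    noise = torch.randn(64, 8)
    fake = G(noise)

    # Discriminator step: label real samples as 1, generated samples as 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call its output real.
    opt_g.zero_grad()
    loss_g = bce(D(G(noise)), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

print("Mean of generated samples:", G(torch.randn(1000, 8)).mean().item())  # approaches ~4
```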

So in a nutshell, AI refers to computers exhibiting human-like intelligence in narrow tasks through statistical learning approaches.

Chatbots – The Poster Child of Modern AI

The AI application capturing most public attention currently is chatbots like ChatGPT. These are trained on massive text datasets to generate conversational responses.

Ask ChatGPT a question, and it will reply with surprising coherence. Tell it to summarize a novel or compose a poem, and it obliges convincingly! This is thanks to AI’s pattern matching prowess.

But chatbots also make silly errors by focusing too much on patterns. Asked a trick variation of the classic question about whether a pound of feathers or a pound of steel weighs more, ChatGPT can get confused by the twist. And Google’s Bard famously flubbed a fact about the James Webb Space Telescope during its debut demo.

So while chatbots seem impressively intelligent, they fundamentally lack true understanding. Their capabilities are confined to statistical predictions within their training data.

Deepfakes – Using AI for Audio/Video Trickery

Another increasingly mainstream AI application is deepfakes – using neural networks to generate fake but realistic images, videos, and audio.

Doctored videos of celebrities and influencers are already alarmingly common on social media. Startups are offering voice cloning services, with implications for fraud. As the tech becomes accessible to everyone, experts warn of an impending surge in AI-powered misinformation and scams.

While most current deepfakes have some giveaways, the technology is evolving rapidly. Soon it may get impossible to tell AI creations from real content. This presents a unique new threat in the digital world.

Jobs, Weapons, and Existential Risks – Assessing the Dangers of AI

AI provokes both utopian and dystopian visions. Some fear it could even spell doom for humanity if powerful systems spiral out of control.

While AI weapons and job losses are concerning real risks, the current state of the technology doesn’t support sentient Terminator robots bent on destroying mankind. Those remain squarely in the realm of science fiction.

That said, experts advocate for regulations and oversight frameworks as AI advances. There are reasonable worries about military applications and financial systems relying blindly on black-box algorithms.

Prominent technologist Elon Musk has described AI as potentially more dangerous than nuclear weapons. Global coordination efforts are needed to address emerging challenges responsibly.

Will AI Take Away Our Jobs?

In the short run, AI will more likely transform jobs than eliminate them wholesale. While certain roles like transcription clearly face disruption, most workers are expected to adopt AI as another productivity tool, similar to spreadsheets or email.

But longer term, the technology could automate a substantial chunk of tasks humans currently do, disproportionately affecting white-collar information workers. Law firms are already utilizing GPT-based tools.

The key would be re-training and job transition support programs to ensure society benefits from AI’s efficiencies and productivity gains, rather than just displacing large swathes of people.

Exploring AI Yourself

To experience today’s AI, try chatbots like ChatGPT and Bing Chat. Experiment with image generation using DALL-E 2 and Midjourney. Notice the AI already working quietly in smartphone apps.

While AI hasn’t yet achieved the flexibility and common sense of human cognition, its ability to parse patterns already makes it a versatile technology. We hope this post has provided a balanced overview of the promise and limitations of artificial intelligence! Let us at DBGM Consulting know how we can take your company into a positive direction with AI.
