I'm trying to install the GitLab Add-on on my distributed deployment with search head and indexer clustering. Where does the GitLab add-on need to be installed? https://splunkbase.splunk.com/app/4381   PS: I installed the app on the SH cluster, but the GitLab add-on UI just keeps loading, and I'm not able to configure the app with the GitLab token.
I have a metric from AWS for the number of messages visible in an SQS queue, which gets computed every 5 minutes.

2023-08-02 11:50:00    13.3
2023-08-02 11:55:00    0.0
2023-08-02 12:00:00    33.8
2023-08-02 12:05:00    0.0

This means that there were 13 messages in the queue, and 5 minutes later they were gone (processed). Then there were 33, and then they were gone (processed). If messages do not get processed, I'd expect this number to keep growing rather than decrease. I need to set up an alert for when that happens. Is there some way to alert when a value grows, say, over 5 rows? Or is there a way to compare a value to itself at different points in time?
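A minimal sketch of the kind of comparison being asked about, using streamstats to look back over previous rows. The index, sourcetype, and field names (aws_metrics, aws:cloudwatch, ApproximateNumberOfMessagesVisible, Average) are assumptions and need to be adapted to how the metric is actually ingested:

index=aws_metrics sourcetype="aws:cloudwatch" metric_name="ApproximateNumberOfMessagesVisible"
| timechart span=5m latest(Average) as visible
| streamstats current=f window=5 min(visible) as min_prev5
| where min_prev5 > 0 AND visible > min_prev5

The streamstats window covers the previous 5 rows (the current one is excluded by current=f), so the where clause only passes when the queue has been non-empty for the last five 5-minute intervals and is still above the smallest of those values, i.e. it keeps growing instead of draining. Saved as an alert that triggers when results exist, this fires only in the "messages are not being processed" case.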
Is it possible to clone dashboards from the Enterprise Security app into a private custom app so that I can modify them for the users in my environment? I tried cloning the Identity Investigator dashboard, but it won't load at all, so I'm wondering whether this is even possible. Thanks!
When the page gets reloaded for my Dashboard Studio dashboard, all of the inputs get reset. I have to enter all the inputs again, which is disruptive to my workflow. This is particularly annoying in two cases:

1. When I reboot my machine, Chrome remembers my tabs but the pages get reloaded, so after a reboot I have to enter all the inputs again.
2. When SSO times out at my company, a login page loads, and after authentication it navigates back to the dashboard. This is annoying because it can happen at any time of day.

Is there a solution for this? Can I encode the input values into the URL for the dashboard so it will automatically load with the correct values even if the page is reloaded?

Here is the source for my inputs:

{
  "type": "input.timerange",
  "options": {
    "token": "global_time",
    "defaultValue": "-24h@h,now"
  },
  "title": "Time Range"
}
{
  "options": {
    "items": [
      { "label": "All", "value": "US, CA, GB" },
      { "label": "US", "value": "US" },
      { "label": "CA", "value": "CA" },
      { "label": "GB", "value": "GB" }
    ],
    "defaultValue": "US, CA, GB",
    "token": "selectedRegion"
  },
  "title": "Region",
  "type": "input.dropdown"
}
{
  "options": {
    "items": [
      { "label": "Unique Companies", "value": "realms" },
      { "label": "Percentage of Total Traffic", "value": "percentage" }
    ],
    "defaultValue": "realms",
    "token": "selectedMode"
  },
  "title": "Mode",
  "type": "input.dropdown"
}
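One approach worth testing, assuming the deployment is on a Dashboard Studio version that accepts form tokens in the URL (classic dashboards definitely do; recent Dashboard Studio releases also read form.<token> query parameters): bookmark or link the dashboard with the token values appended, using the token names from the source above.

https://<splunk_host>/en-US/app/<app_name>/<dashboard_id>?form.selectedRegion=US&form.selectedMode=realms&form.global_time.earliest=-24h%40h&form.global_time.latest=now

If the reload keeps the URL (as in the Chrome restore case), the inputs should come back pre-populated; whether the SSO redirect preserves the query string depends on the identity provider configuration.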
Splunk Lantern is a customer success center that provides advice from Splunk experts on valuable data insights, key use cases, and tips on managing Splunk more efficiently. We also host Getting Started Guides for a range of Splunk products, a library of Product Tips, and Data Descriptor articles that help you see everything that's possible with data sources and data types in Splunk.

This month we're sharing all the new articles we've published over the past month, with lots of interesting new use cases, product tips, and data articles. We're also asking for your vote in our Customer Choice Content Competition - over the quarter we've been developing articles that meet direct asks from you, our customers, and now we want to hear which one is your favorite. Read on to find out more!

This Month's New Articles

We've published so many interesting articles this month that it's hard to pick just a few to focus on!

The definitive guide to best practices for ITSI is a comprehensive guide to best practices for Splunk ITSI. Compiled by ITSI SMEs at Splunk and designed for ITSI administrators, the guide provides essential guidelines to ensure optimal operations and an excellent end-user experience, helping you to unlock the full potential of ITSI. You'll learn recommended best practices for configuring and optimizing ITSI deployments, including data ingestion, service modeling, notable event management, advanced analytics, and more. This guide will continue to grow, so look out for more updates in the coming months!

We're also proud to publish our first article on Splunk Mission Control. Getting started with Splunk Mission Control for unified security operations is a great guide for anyone who's new to, or curious about, Mission Control. This article walks you through an example investigation from the perspective of a SOC analyst using Mission Control, showing you how to work with events and run automated responses with Splunk Mission Control playbooks.

Getting Started with the Google Chrome App for Splunk helps SOC analysts and IT security professionals address the growing risks from risky browser behavior. Learn how to use the Google Chrome Add-on and App for Splunk to bring Chrome Threat and Data Protection events into Splunk, improve investigations with prebuilt dashboards, and automate responses such as blocking risky extensions. The step-by-step instructions in the article help you to configure the Splunk platform and set up the integration in Chrome Browser Cloud Management (CBCM).

Finally, Managing the lifecycle of an alert is a new article that brings together several existing Lantern use cases into a complete alert management workflow. It takes guidance from Docs and blends it with best practices and example configurations from Splunk experts, allowing you to create a comprehensive approach to managing the lifecycle of an alert, encompassing detection, triage, investigation, and remediation.

Those articles are just scratching the surface of everything we've published this month. Here's the full list of articles now live across Platform, Security, and Observability.
Platform

Routing root user events to a special index
Hiding rows or panels in dashboards with XML
Masking IP addresses from a specific range
Running the Splunk OpenTelemetry Collector on Darwin
Collecting Mac OS log files
Mac OS

Security

Understanding the Event Sequencing engine
Following best practices for designing playbooks
Using a playbook design methodology
Understanding SOAR case management features
Customizing Enterprise Security dashboards to improve security monitoring
Managing data models in Splunk Enterprise Security
Optimizing correlation searches in Enterprise Security
Using the workbench in an Enterprise Security investigation
Comparing security domain dashboards in Enterprise Security
Using protocol intelligence in Enterprise Security

Observability

Using SRE golden signals for KPIs
Using the Monitoring and Alerting Content Pack
Configuring notable event timestamps to match raw data
Using the correct KPI statistical functions for alerting
Limiting the number of KPIs per service
Choosing KPI base searches over ad hoc searches
Review alerts received when a pending state occurs

Cast Your Vote in Lantern's Customer Choice Content Competition!

Lantern is running a competition for the best article created in the past quarter that answers a direct ask from you, our customers. You might have seen one of our surveys popping up on our site asking you what content you're looking to see on Lantern, and Splunkers from around the company have been working to answer your call.

We've chosen six articles that we've published over the past quarter that answer these direct customer asks - from content for working with Mac files, to GitLab content, OTel, and more - and we're asking all Splunk customers to vote on their favorite. We want to hear what you think is the most useful, the most interesting, or simply the Splunkiest of the bunch. Cast your vote using this form by the 15th of August!

Preparing for certificate-based authentication changes on Windows domain controllers
Running the Splunk OpenTelemetry Collector on Darwin
Collecting Mac OS log files
Getting GitLab CI/CD data into the Splunk platform
Sending GitLab webhook data to the Splunk platform
Customizing the Splunk OpenTelemetry distribution to accommodate unsupported use cases

We hope you've found this update helpful. Thanks for reading!

Kaye Chapman, Senior Lantern Content Specialist for Splunk Lantern
Hello I have some array data located within a field in my data. It comes from DBConnect and isnt exactly JSON. Im trying to figure out how to extract this data so I can make it useful but Ive gone through several iterations and have been unsuccessful so far. Hers a sample record(I picked a fairly long one for an example):       2023-08-02 08:23:28.000, CASE_ID="50031iIQAQ", STAGE="Initial", EVENT_TYPE="SS", SUBMISSION_DATE="2023-08-02 06:23:23.0", SUBMISSION_TYPE="Application INITIAL SUBMISSION", DESTINATION=""Application ", SUBMISSION_DATA="{ "submissionType": "Application INITIAL SUBMISSION", "submissionDate": "2023-08-02 06:23:23", "stage": "Initial", "repository": "central", "xxxxMessage": { "header": { "timeStamp": "2023-08-02 06:23:24", "source": "XXXXX", "messageType": "XXX CCC SSS", "domain": "APPLICATION/SUBMISSION" }, "body": { "submissionUnit": { "field": [ { "value": "500871iIQAQ", "name": "sourceId" }, { "value": "CCC VVVV", "name": "submission_unit_type" }, { "value": "1", "name": "submission_unit_number" }, { "value": "2023-08-02 06:23:23", "name": "submit_date" }, { "value": "2023-08-02", "name": "xxxx_received_date" } ] }, "submission": { "field": [ { "value": "5000871iIQAQ", "name": "sourceId" }, { "value": "00132546", "name": "submission_event_id" }, { "value": "XXXX", "name": "submission_type" }, { "value": "1", "name": "submission_number" }, { "value": "Pending", "name": "submission_status" }, { "value": "2023-08-02", "name": "submission_status_effective_date" }, { "value": "TTTT 1", "name": "xxxxx_xxxxxx" }, { "value": "String of data goes here", "name": "proposed_change" } ] }, "referencedMonographs": { "referencedMonograph": [ { "field": [ { "value": "a001YmysUAC", "name": "sourceId" }, { "value": "XXX2", "name": "ref_monograph_number" }, { "value": " String of data goes here", "name": "ref_monograph_description" } ] } ] }, "organizations": { "organization": [ { "field": [ { "value": "a07000nMzDQAU", "name": "sourceId" }, { "value": "String of data goes here", "name": "organization_name" }, { "value": "117185493", "name": "xxxx_number" }, { "value": "12071", "name": "company_global_id" }, { "value": "Requestor", "name": "contact_type" }, { "value": "String of data goes here", "name": "address_line1" }, { "value": "CityName", "name": "city" }, { "value": "US", "name": "country" }, { "value": "XX", "name": "state" }, { "value": "XXXXX-AAAA", "name": "postal_code" } ], "contact": { "field": [ { "value": "a07nMzDQAU", "name": "sourceId" }, { "value": "Requestor", "name": "contact_type" }, { "value": "XXXX", "name": "first_name" }, { "value": "CCCC", "name": "last_name" }, { "value": "2221112222", "name": "phone_number" }, { "value": "test@test.com", "name": "email_address" }, { "value": "1111 NW 1st St", "name": "address_line1" }, { "value": "CCCCC", "name": "city" }, { "value": "United States", "name": "country" }, { "value": "CD", "name": "state" }, { "value": "11111", "name": "postal_code" } ] } } ] }, "ingredients": { "ingredient": [ { "field": [ { "value": "a041R8IbQAK", "name": "sourceId" }, { "value": "XXXXXXXXX", "name": "sssss_aaaaaa" }, { "value": "U8LYN0Y118", "name": "CCCC" }, { "value": "311218848", "name": "wwwwww_global_id" }, { "value": "String of data goes here", "name": "XXXXX_strength" }, { "value": "XX", "name": "numerator_unit" }, { "value": "12.00", "name": "numerator_strength" }, { "value": "1", "name": "denominator_unit" }, { "value": "1.00000", "name": "denominator_strength" }, { "value": "40", "name": "CCCCCC" }, { "value": "Month", "name": 
"xxxxx_frequency" }, { "value": "String of data goes here", "name": "age_group" }, { "value": "String of data goes here", "name": "xxxxx_form" }, { "value": "xxxxx", "name": "xxx_xxxx_xxx" }, { "value": "xxxxxx/ccccc", "name": "xxxxx_class" }, { "value": "0117984AB", "name": "xxxxx_xxxxx" }, { "value": "U8LY118", "name": "xxxxx_value" }, { "value": "xx", "name": "xxxxx_type" } ] } ] }, "xxxx_event_id": "00132542", "contacts": { "contact": [ { "field": [ { "value": "a070nMzEQAU", "name": "sourceId" }, { "value": "String of data goes here", "name": "contact_type" }, { "value": "XXXXXXXX", "name": "first_name" }, { "value": "CCCCCCCC", "name": "last_name" }, { "value": "+12223334444", "name": "phone_number" }, { "value": "test@test.com", "name": "email_address" }, { "value": "321 Drive", "name": "address_line1" }, { "value": "XXXXXXXXX", "name": "city" }, { "value": "United States", "name": "country" }, { "value": "VVVVVVV", "name": "state" }, { "value": "11111", "name": "postal_code" } ] } ] }, "attachment_metadata": { "total_attachment_count": 1, "application_attachment": { "type": "application", "submission_attachment": { "sub_submission": [ { "name": "String of data goes here", "attachment_count": 1 } ], "name": "String of data goes here" }, "name": "CCCCC" } }, "application": { "referencedMonographs": { "referencedMonograph": [ { "field": [ { "value": "a0Z3S000001YmynUAC", "name": "sourceId" }, { "value": "M009", "name": "ref_monograph_number" }, { "value": " String of data goes here", "name": "ref_monograph_description" } ] } ] }, "field": [ { "value": "5003872puQAA", "name": "sourceId" }, { "value": "XXXXX", "name": "application_type" }, { "value": "12345678", "name": "application_number" }, { "value": "Pending", "name": "application_status" }, { "value": "2023-08-02", "name": "application_status_effective_date" }, { "value": "Requestor", "name": "requestor_role" }, { "value": "test1", "name": "application_justification" } ] } } }, "eventType": "WE", "documentDestination": "Applications", "caseID": "5003871iIQAQ", "attachments": [ { "type": "docx", "storageLocation": "/XXXX/Data Stored HEre", "processedInd": "N", "name": "Test 3.docx", "fileSize": "11885", "fileId": "0683S000001o43hQAA", "file_metadata": [ { "value": "String of data goes here", "key": "docCategory" }, { "value": "1.1", "key": "sectionNumber" }, { "value": "Table of Contents", "key": "sectionName" }, { "value": "5003iIQAQ", "key": "sourceId" } ], "contentVersionId": "xxx/cccc/dddddd/0693hQAM/0683001QAA/Test 3.docx" } ] }", XXXX_MESSAGE_ID="7eewrty6e9-00ea-4e58-981f-3cn56igb82f", CREATED_BY="XXXX_CCC_APP", CREATED_DATETIME="2023-08-02 09:23:25", MODIFIED_BY="XXXX_CCC_APP", MODIFIED_DATETIME="2023-08-02 09:23:28", PROCESSED_STATUS="success"         Anyone have any ideas how I might accomplish getting the SUBMISSION_DATA field extracted properly? Thanks for the help
A new entry appears every few days in the Forwarder Management area. Phone homes are only working for the latest entry. Same Host Name, same IP Address, only the Client Name is different. Any ideas?
Hi Splunky people! We are excited to share the newest updates in Splunk Cloud Platform 9.0.2305!

Analysts can benefit from:

Easier discoverability of Splunk Observability Cloud, with in-product demo requests and educational content to learn how to examine logs in context with metrics and traces
Dashboard Studio improvements:
- A new post-conversion report when a Classic dashboard is converted with the Clone in Dashboard Studio feature, detailing which objects or options need manual adjustments after automatic conversion
- An updated code editor that can be expanded while making edits in the UI
- New ability to configure drilldowns from dashboards to custom searches and reports
- New ability to configure workflow actions to work with Events Viewer visualizations

Admins can benefit from:

Expanded search capabilities with Federated Search for Amazon S3, to get Splunk insights from Amazon S3 buckets without the need for data ingestion (coming soon)
Simplified certificate rotation with a new REST API endpoint to rotate the server certificate without restarting the server
Streamlined process for adding certificates, with support for the OS certificate trust store and a certificate management API

Python 2 is being deprecated and will no longer be available in coming releases. Older jQuery libraries are no longer supported but can be enabled by Support for emergency needs, with jQuery 3.5 set as the default.

Your SaaSy (Splunk-as-a-Servicey) Updater,
Judith Silverberg-Rajna, Splunk Cloud Platform
I am trying to create an alert or a report to track the number of deferred searches. We had an issue where the cluster captain deferred a massive number of searches, and it messed up a few things. We are trying to create an alert to help mitigate that in the future. In addition to asking for the best way to create an alert for this, I'd also like some clarification on how to find the deferred searches.

Through the monitoring console, either on the DMC or the Cluster Master, I thought I had seen a panel for deferred searches, but I cannot find one now. And when I run the search

index=_internal earliest=-24h "status=skipped" sourcetype=scheduler
| stats count by host app
| sort - count

I get results, but if I change the status to deferred, which I assume is a valid status, I do not get anything. I was advised to run

| rest /services/search/jobs
| search status=deferred
| table id, search, app, owner, earliest_time, latest_time, status, sid

but I do not get any status back - status is not a field.

The main question I have is: how do I access the number of deferred searches? If I can find that, I can run stats count on it.

Thank you.
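A minimal sketch of the kind of scheduler search that should surface deferrals, assuming the scheduler logs in your version record status="deferred" (deferred and skipped are distinct outcomes: a deferred search is postponed and re-run later, a skipped one never runs):

index=_internal sourcetype=scheduler earliest=-24h status=deferred
| stats count by host, app, savedsearch_name
| sort - count

The search quoted above used "status=skipped" as a raw-text phrase; filtering on the extracted status field instead (status=deferred, no quotes) avoids missing events where the text is formatted differently. If this still returns nothing over a window where deferrals definitely happened, running index=_internal sourcetype=scheduler status=* | stats count by status shows which status values the deployment actually logs.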
I am trying to dig through some records and get the q (query) parameter from the raw data, but I keep getting data back that includes a backslash after the requested field (mostly as a Unicode character representation, \u0026, which is an &). For example, I have this search query to capture the page from which a search is being made (i.e., "location"):

index="xxxx-data" | regex query="location=([a-zA-Z0-9_]+)+[^&]+" | rex field=_raw "location=(?<location>[a-zA-Z0-9%-]+).*" | rex field=_raw "q=(?<q>[a-zA-Z0-9%-_&+/]+).*" | table location,q

Which mostly works in the Statistics tab, except that it occasionally returns the next URL parameter, i.e.:

location | q
home_page | hello+world   // this is ok
about_page | goodbye+cruel+world\u0026anotherparam=anotherval   // not ok

The second result should just be goodbye+cruel+world without the following parameter. I have tried adding variations on regex NOT [^\\] for a backslash character, but everything I've tried has either resulted in an error about the final bracket being escaped, or the backslash character being ignored, like so:

rex field=_raw ...

regex attempt: "q=(?<q>[a-zA-Z0-9%-_&+/]+[^\\\]).*"
result: goodbye+cruel+world\u0026param=val

regex attempt: "q=(?<q>[a-zA-Z0-9%-_&+/]+[^\\]).*"
result: Error in 'rex' command: Encountered the following error while compiling the regex 'q=(?<q>[a-zA-Z0-9%-_&+/]+[^\]).*': Regex: missing terminating ] for character class.

regex attempt: "q=(?<q>[a-zA-Z0-9%-_&+/]+[^\]).*"
result: Error in 'rex' command: Encountered the following error while compiling the regex 'q=(?<q>[a-zA-Z0-9%-_&+/]+[^\]).*': Regex: missing terminating ] for character class.

regex attempt: "q=(?<q>[a-zA-Z0-9%-_&+/]+[^\\u0026]).*"
result: Error in 'rex' command: Encountered the following error while compiling the regex 'q=(?<q>[a-zA-Z0-9%-_&+/]+[^\u0026]).*': Regex: PCRE does not support \L, \l, \N{name}, \U, or \u.

regex attempt: "q=(?<q>[a-zA-Z0-9%-_&+/]+[^u0026]).*"
result: goodbye+cruel+world\u0026param=val"

regex attempt: "q=(?<q>[a-zA-Z0-9%-_&+/]+[^&]).*"
result: goodbye+cruel+world\u0026param=val"

regex attempt: "q=(?<q>[a-zA-Z0-9%-_&+/]+).*"
result: goodbye+cruel+world\u0026param=val

regex attempt: "q=(?<q>[a-zA-Z0-9%-_&+/^\\\\]+)"
result: goodbye+cruel+world\u0026param=val

The Events tab data looks like:

apple: honeycrisp
ball: baseball
car: Ferrari
query: param1=val1&param2=val2&param3=val3&q=goodbye+cruel+world&param=val
status: 200
... etc ...

So, how can I get the q value to return just the first parameter, ignoring anything that has a \ or & before it and terminating just at q? And please, if you would be so kind, include an explanation of why what you suggest works? Thanks
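A sketch of a rex that stops where intended, along with the reason the original class keeps matching: inside [a-zA-Z0-9%-_&+/], the sequence %-_ is read as a character range from % (0x25) to _ (0x5F), and that range already contains both & (0x26) and \ (0x5C), so the greedy + happily runs across \u0026. Appending [^\\] afterwards only demands one extra non-backslash character at the end; it does not stop the run. Excluding the backslash and ampersand from the class itself (and dropping the accidental range) does:

... | rex field=_raw "q=(?<q>[^\\\\&\s\"]+)"
| table location, q

Inside the double-quoted SPL string, \\\\ reaches PCRE as \\, i.e. a literal backslash inside the character class, so the capture ends right before the \ of \u0026 (and before a real &, a quote, or whitespace). The field name and the surrounding pipeline are taken from the question; only the pattern changes.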
Hello, I'm trying to create a search to identify instances of bulk system deletions that took place within a one-minute time frame, and I'd like a way to consolidate all of these results into a single search query. Thanks
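A minimal sketch of the bucketing approach, assuming Windows Security logs where EventCode 4743 ("a computer account was deleted") marks a system deletion; the index, field names, and threshold here are all assumptions to adapt to the actual data source:

index=wineventlog sourcetype="WinEventLog:Security" EventCode=4743
| bin _time span=1m
| stats count as deletions, values(object) as deleted_systems by _time, user
| where deletions >= 10

bin groups events into one-minute buckets, stats counts deletions per user per bucket, and the where clause keeps only the bursts, so every bulk-deletion window comes back from a single search.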
I am populating a drop-down on a Dashboard Studio dashboard from a lookup table. I want to display one column as the selection label in the drop-down but use another column's value in the searches. I know it's possible in a classic dashboard, but I'm not sure about Dashboard Studio. Thanks for the help
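A sketch of the data-source side, assuming a lookup called region_lookup.csv with columns display_name and region_code (both hypothetical): back the drop-down with a search that exposes one field for the label and one for the value, then point the input's dynamic options at those two fields in the Dashboard Studio configuration panel (Dashboard Studio drop-downs backed by a search-based data source let you choose separate label and value fields).

| inputlookup region_lookup.csv
| table display_name, region_code
| rename display_name as label, region_code as value

The token set by the input then carries the value column (region_code), while users see the friendlier display_name in the menu.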
I'm trying to create an outbound port on our Splunk Cloud instance, without any luck.

curl -X POST 'https://admin.splunk.com/important-iguana-u5q/adminconfig/v2/access/outbound-ports' \
--header 'Authorization: Bearer eyJraWQiOiJzcGx1bmsuc2VjcmV0IiwiYWxnI...' \
--header 'Content-Type: application/json' \
--data-raw '{ "outboundPorts": [{"subnets": ["34.226.34.80/32", "54.226.34.80/32"], "port": 8089}], "reason": "testing federated search connection" }'

Following the documentation, I receive this error:

{
    "code": "404-stack-not-found",
    "message": "stack not found. Please refer https://docs.splunk.com/Documentation/SplunkCloud/latest/Config/ACSerrormessages for general troubleshooting tips."
}

I've also tried importing the curl command into Postman, but I get the same answer there. Has anyone faced the same issue?

kr Sandro
I need to understand which event types each search result record belongs to. My search: index="a" AND eventtype="*" I want the results to contain a field with a list of matching event types. It would be OK for me to have a table with columns _raw and eventtypes. We have 10k+ event types and thousands of events. Is this possible to achieve? Thanks.
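A minimal sketch of the table being described - eventtype is already a multivalue field on each matching event, so it only needs to be flattened for display:

index="a" eventtype=*
| eval eventtypes=mvjoin(eventtype, ", ")
| table _raw, eventtypes

mvjoin collapses the multivalue eventtype field into one comma-separated string per event; leaving out the eval keeps eventtypes as a multivalue column instead. With 10k+ defined event types, evaluating eventtype=* is itself the expensive part, so constraining the time range keeps the search manageable.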
Hello Splunk Experts, I'm searching for ERRORS and WARN in the application logs from different servers and trying to collect those log lines into a stored area (a summary index, maybe with its own sourcetype) to avoid searching again and again over a huge volume of data. I don't want to use a lookup because of the data volume. What is the procedure to get this done? Could someone please assist? Thanks in advance!!
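A minimal sketch of the usual pattern, assuming a summary index named summary_app_errors has already been created and the events carry a log_level field (both assumptions): schedule a search that finds the ERROR/WARN lines and writes them to the summary index with collect, then report against the small summary index instead of the raw data.

index=app_logs (log_level=ERROR OR log_level=WARN)
| fields _time, host, source, sourcetype, log_level, _raw
| collect index=summary_app_errors

Running this as a saved search on a schedule (for example every 15 minutes over the previous 15 minutes) keeps the summary current; the same effect can be achieved by enabling summary indexing on the scheduled report instead of piping to collect explicitly.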
Hi, I inherited a Splunk Enterprise environment. It is composed of 10 machines, divided into development and production (the latter with 2 clustered indexers). One machine serves as the Monitoring Console.

I found an app (with several reports) present on both development and production. The report has a cron schedule of 44 * * * * in production and 12 1/13 * * * in development and produces a KV Store lookup (with the exact same name as the report). Other reports in other apps make use of the lookup.

On the Monitoring Console, under Search > Scheduler Activity > Scheduler Activity: Instance, the "Aggregate Scheduled Search Runtime" chart shows that same report with >60 Runtime (seconds) in 1-minute bins. How is that possible if the lookup (and not the report) is scheduled to run? If I click on a 1-minute bar in the chart, the drill-down opens another chart with, among others, the fields PID and PPID as well as Elapsed Time (e.g. 744617.8700 within 50 seconds! Are these even seconds?).

Trying to understand where these values come from (and what is running the report), I only find similar results with this query:

index=_introspection 20664 14912

and this is an example of the results (edited):

{"datetime":"08-02-2023 11:24:37.275 +0200","log_level":"INFO","component":"PerProcess","data":{"pid":"14912","ppid":"20664","status":"W","t_count":"12","mem_used":"61.352","pct_memory":"0.53","page_faults":"0","pct_cpu":"0.00","normalized_pct_cpu":"0.00","read_mb":"0.000","written_mb":"0.109","fd_used":"28","elapsed":"754858.4800","process":"splunkd","process_type":"search","search_props":{"sid":"scheduler__nobody_Q0dJLXNlYXJjaGhlYWRzLWdscGktc2VhcmNoZXM__RMD53efdbadd3a98c46d_at_1690213440_46074","user":"splunk-system-user","app":"biz-searchheads-glpi-searches","label":"glpi_states_table_lookup","provenance":"scheduler","scan_count":"0","delta_scan_count":"0","role":"head","mode":"historical","type":"scheduled"}}}

I disabled the report in both development and production, but the Monitoring Console chart above keeps showing the same results. Can somebody help me understand what is going on? How do I find out where the results on the Monitoring Console for that report come from? Is this from the lookup (and not the report)? Is there some hidden mechanism running the report even if it is disabled? Thanks!
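A sketch of a way to see which search processes are still reporting under that label, using the fields visible in the sample introspection event above (the app and label values are taken from that sample):

index=_introspection sourcetype=splunk_resource_usage component=PerProcess
    data.search_props.app="biz-searchheads-glpi-searches"
    data.search_props.label="glpi_states_table_lookup"
| stats latest(data.elapsed) as elapsed_s, latest(_time) as last_seen by data.pid, data.search_props.sid
| convert ctime(last_seen)

If this keeps returning the same PID/SID pair with an ever-growing elapsed value, the chart is most likely showing one long-lived (possibly stuck or orphaned) scheduled-search process rather than fresh runs - elapsed in these PerProcess events is, as far as I understand it, the lifetime of the process in seconds, which would explain a value like 754858 (roughly 8.7 days). Such a process keeps being sampled every few seconds even after the report is disabled, until the splunkd search process is killed.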
Hi there, I created a Splunk dashboard (classic) which I want to download/export as a PDF. However, I am unable to do so because trellis layouts are not supported in PDF export. Also, when I try to print/export, the layout of the dashboard widgets/panels gets distorted. I need help finding the best way to download the dashboard exactly as it appears in the classic view with the dark theme - a snapshot or image format - so that all graphs and the overall look and feel stay intact. Also, is it possible to schedule an email that sends the same downloaded dashboard image?
Hi Team, how can I export/share a dashboard (in Splunk Cloud) with anonymous users? Thanks.
02.08.2023 12:44:10.690 *INFO* [sling-threadpool-2cfa6523-0895-49ea-bb99-ae6f63c25cf6-32-Create Site from Template(aaa/jobs/abc)] bbb.CreateSiteFromSiteTemplateJobExecutor Private Site : ‘site4’ created by user : ‘admin’ with MRNumber :  ‘dr4’

I want to extract the site, user, and MR number from events like this and create a table.
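A minimal sketch of a rex for that line, assuming the quotes in the indexed event are plain single quotes (the post shows curly quotes, which may just be a copy/paste artifact - if the real data has curly quotes, use those characters in the pattern instead) and using a placeholder index name:

index=aem_logs "CreateSiteFromSiteTemplateJobExecutor"
| rex "Private Site :\s+'(?<site>[^']+)'\s+created by user :\s+'(?<user>[^']+)'\s+with MRNumber :\s+'(?<mr_number>[^']+)'"
| table _time, site, user, mr_number

The three named capture groups grab whatever sits between the quotes after each literal label, and \s+ tolerates the variable spacing around the colons (there are two spaces before the MRNumber value in the sample).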
I want to create a use case; below is the scenario. Suppose we have a device that creates a new temporary user for every new session and deletes that user when the session ends. Now I want to check whether a user was created but not deleted within 24 hours. How can I detect this absence of an event?
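A minimal sketch of one way to detect the missing deletion, assuming the create/delete events share a user field and an action field with values "created" and "deleted" (all of these names are placeholders for whatever the device actually logs):

index=device_logs (action="created" OR action="deleted")
| stats earliest(eval(if(action="created", _time, null()))) as created_time,
        latest(eval(if(action="deleted", _time, null()))) as deleted_time
        by user
| where isnotnull(created_time) AND isnull(deleted_time) AND created_time < relative_time(now(), "-24h")
| convert ctime(created_time)

For each user, stats records when it was created and (if ever) deleted; the where clause keeps only users created more than 24 hours ago with no deletion event, which is exactly the "event that never arrived" case. Run it over a window longer than 24 hours (for example the last 7 days) so the original creation event is still in scope.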