All Topics


Hi All, I need help with a Splunk query for the following scenario. Query 1: index=abc | table src, dest_name, severity, action — if it finds a dest_name for any high or critical severity event, it should look for a matching computerdnsname in index xyz and, if there is a match, display the result. Query 2: index=xyz
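A subsearch can express that kind of correlation. The sketch below is one possible shape, assuming the severity values are literally "high" and "critical" and that dest_name in index abc corresponds to computerdnsname in index xyz (the subsearch results become filter terms for the outer search):

```
index=xyz
    [ search index=abc severity IN ("high", "critical")
      | dedup dest_name
      | rename dest_name AS computerdnsname
      | fields computerdnsname ]
```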
Hello! I can't manage to get Splunk to extract the following timestamp at import: 2015-12-01 00:00:00+00 Could you help me find the format string required for proper extraction? Thanks!
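A props.conf stanza along these lines might work (a sketch only; the sourcetype name is a placeholder, and whether %z accepts a bare two-digit offset like +00 should be verified against a sample of the data):

```
[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S%z
MAX_TIMESTAMP_LOOKAHEAD = 22
```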
Hi, I am trying to get an App Key from your Controller. I tried step 3 in the wizard by selecting Install the iOS Agent and the option Auto-generate a new Mobile App Group. After loading, it generates the code snippet in the Controller, but the snippet doesn't contain an app key. FYI, I am using the sample app for running and exploring AppDynamics. To use the sample app I need an app key to put in the AppDelegate, but the one generated in the Controller comes back empty. Is there a process or additional steps involved to generate one? ADEumAgentConfiguration *config = [[ADEumAgentConfiguration alloc] initWithAppKey:@" "]; ^ Post edited by @Ryan.Paredez for formatting and searchability
Good morning! I updated my indexer cluster/SHC to 8.2.6 yesterday, and everything went fairly well, except for the "Health of Splunk Deployment" screen (top right of the screen, the (!) next to my username). On different instances it is either completely broken or working (ish). In either case, I cannot scroll down to see the rest of it. I have restarted the instances, changed browsers, etc. It was definitely working before.
I have two sourcetypes, WinHostMon and WinEventLog, from the Splunk Add-on for Microsoft Windows. After doing the Asset and Identity configuration in Splunk ES, the lookup file is fine: I can see the results with the search command | inputlookup test_assets2.csv, and the asset lookup information is also displayed in the ES > Security Domains > Identity > Asset Center dashboard. But there is a problem: the enrichment fields such as dest_asset, dest_asset_id, ... only appear in the WinHostMon sourcetype. Can someone help me please? Thank you very much!
Hi teams, I am a newbie to Splunk. I have log messages like this: 10/04/2022 10:12:31.000   START RequestId: 46618528-6242-4eee-97b2-270e875bac1e Version: 165 END RequestId: 46618528-6242-4eee-97b2-270e875bac1e REPORT RequestId: 46618528-6242-4eee-97b2-270e875bac1e Duration: 68.98 ms Billed Duration: 69 ms Memory Size: 256 MB Max Memory Used: 170 MB START RequestId: 9a8f3f1e-aa03-40d9-a064-bb10a47a92eb Version: 163 END RequestId: 9a8f3f1e-aa03-40d9-a064-bb10a47a92eb REPORT RequestId: 9a8f3f1e-aa03-40d9-a064-bb10a47a92eb Duration: 3.76 ms Billed Duration: 4 ms Memory Size: 256 MB Max Memory Used: 184 MB   I want to get the Max Memory Used value as a percentage (Max Memory Used / Memory Size) for each message and create a time chart to show this value. Can anyone help me with this?
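A rex-based extraction over the REPORT lines might look like the sketch below (the index name and span are placeholders, not taken from the post):

```
index=my_lambda_logs "REPORT RequestId"
| rex "Memory Size: (?<mem_size>\d+) MB Max Memory Used: (?<mem_used>\d+) MB"
| eval mem_pct = round((mem_used / mem_size) * 100, 1)
| timechart span=1h avg(mem_pct) AS avg_mem_used_pct
```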
The search timechart [stats count | eval range="$timeRange$" | eval search=case(range=="-6h", "span=30m ", range=="-1d", "span=1h ", range=="-3d", "span=2h ", range=="-7d", "span=4h ")] stopped working after upgrading Splunk from 8.0.6 to 8.2.5.
Hi, I know this is probably an easy one, but I'm new and need some help. I have the following field called "Account Name":

Account Name
Alan Test Account
Debbie Production Account
John Dev Account
Ed Test Account

I would like to create a new field called Environment that matches Test, Production, or Dev:

Account Name                  Environment
Alan Test Account             Test
Debbie Production Account     Production
John Dev Account              Dev
Ed Test Account               Test
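One way to derive such a field is an eval with case() and match(), as in this sketch (the "Unknown" fallback is an addition for values that match none of the three keywords):

```
| eval Environment = case(
    match('Account Name', "Test"), "Test",
    match('Account Name', "Production"), "Production",
    match('Account Name', "Dev"), "Dev",
    true(), "Unknown")
```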
Hello, We're running into an issue with a UF sending data to a new metrics index under an app deployed by our deployment server. None of the perfmon inputs are sending data into our new index, and we're not seeing any errors. We also have the Splunk TA Windows Base app deployed to these same servers, and if we test adjusting the inputs.conf stanzas in the TA app to send perfmon metrics to our new index, it works fine. Below are the input stanzas from both apps:

Custom app:

## Process
[perfmon://Process]
counters = % Processor Time; % User Time- Private
disabled = 0
instances = *
interval = 60
mode = single
object = Process
useEnglishOnly=true
index = custom_metrics

TA Windows Base originally:

## Process
[perfmon://Process]
counters = % Processor Time; % User Time; % Privileged Time; Virtual Bytes Peak; Virtual Bytes; Page Faults/sec; Working Set Peak; Working Set; Page File Bytes Peak; Page File Bytes; Private Bytes; Thread Count; Priority Base; Elapsed Time; ID Process; Creating Process ID; Pool Paged Bytes; Pool Nonpaged Bytes; Handle Count; IO Read Operations/sec; IO Write Operations/sec; IO Data Operations/sec; IO Other Operations/sec; IO Read Bytes/sec; IO Write Bytes/sec; IO Data Bytes/sec; IO Other Bytes/sec; Working Set - Private
disabled = 0
instances = *
interval = 10
mode = single
object = Process
useEnglishOnly=true
index = ava_cs_metrics

TA Windows Base changed to point to our new index (which writes to the index fine):

## Process
[perfmon://Process]
counters = % Processor Time; % User Time; % Privileged Time; Virtual Bytes Peak; Virtual Bytes; Page Faults/sec; Working Set Peak; Working Set; Page File Bytes Peak; Page File Bytes; Private Bytes; Thread Count; Priority Base; Elapsed Time; ID Process; Creating Process ID; Pool Paged Bytes; Pool Nonpaged Bytes; Handle Count; IO Read Operations/sec; IO Write Operations/sec; IO Data Operations/sec; IO Other Operations/sec; IO Read Bytes/sec; IO Write Bytes/sec; IO Data Bytes/sec; IO Other Bytes/sec; Working Set - Private
disabled = 0
instances = *
interval = 10
mode = single
object = Process
useEnglishOnly=true
index = custom_metrics
How do I access and submit Splunk Observability Cloud cases to the Splunk Support Portal? For existing customers who used the Splunk Observability Cloud (SignalFx) Support site, what’s changed about the support process now that we’ve moved to the Splunk Support Portal?
Hi All, I want to pull AD logs into Splunk Cloud. I see some sources saying the Splunk Add-on for Microsoft Windows (6.0.0 and above) pulls AD logs, and another add-on also does the same thing. I am confused. Can you point me in the right direction? Thanks in advance.
I am looking for a way to move AK and HI on the dashboard so they can be viewed alongside the continental US. I did find the following documentation: https://docs.splunk.com/Documentation/DashApp/0.9.0/DashApp/mapsChorConfig#:~:text=One%20exception%20is%20the%20placement,increase%20the%20%22y%22%20value. However, when I convert the JSON to XML, it states that 'LogicalBounds' is invalid. Is there a way to use this? I do not want to zoom in and out on my Splunk dashboard to view data.
I have version 1.76 of the TA-user-agents app installed on my search head for use with searching against web access logs; to prepare for this, I created a field extraction called "http_user_agent" to extract a string similar to this: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.3 Safari/605.1.15 When I run the following search over a 15 minute timeframe in Fast Mode...   index=foo source=*access* sourcetype=bar | lookup user_agents http_user_agent | search ua_family="Safari" ua_os_family="Mac OS X" | eval browserOS = ua_family . ":" . ua_os_family | timechart count by browserOS limit=0   ...I find that it takes what seems to be a VERY long time to complete (446 seconds to search through 418,000 events). If I just run the base search without the "lookup", "search", "eval" and "timechart", it takes 3.3 seconds to execute against the same number of events. Is this expected behavior for the TA, or is my search not optimized correctly? 
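Because the user_agents lookup has to parse every event's UA string, a common mitigation is trimming the event stream to only the fields the pipeline needs before the lookup runs. A sketch of that variant (whether it helps depends on where the time is actually going, which the job inspector can confirm):

```
index=foo source=*access* sourcetype=bar
| fields _time, http_user_agent
| lookup user_agents http_user_agent
| search ua_family="Safari" ua_os_family="Mac OS X"
| eval browserOS = ua_family . ":" . ua_os_family
| timechart count by browserOS limit=0
```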
I am having a lot of skipped reports, but I think it is due to the nobody user. Increasing the system setting in limits.conf does not have an impact on reports that run as nobody. On a beta site, I increased max_hist_searches to a huge number but I still get the same errors. Other users can have unlimited searches, but because the nobody role is not an official role, am I correct to say I need to reassign all these alerts? This is the error I am getting. So I think if I assign the reports to a real user, they will fall under a role and then it will work?
How do I write a time format for the event log below?

2022-04-07 20:40:03.360 -06:00 [XXX] Hosting starting.
2022-04-07 20:40:03.474 -06:00 [XXX] Hosting starting.
2022-04-07 20:40:03.493 -06:00 [XXX] Hosting starting.

I am getting: Could not use strptime to parse the time stamp from 2022-04-07 20:40:03.360 -06:00

[Sourcetype]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)\d+\-\d+\-\d+\s\d+\:\d+\d:\d+\.\d+\s+[^\]]+\]
NO_BINARY_CHECK=true
disabled=false
TIME_PREFIX=^
TIME_FORMAT=%Y/%m/%d %H:%M:%S.%3N %z
MAX_TIMESTAMP_LOOKAHEAD=31

Kindly guide me to fix this timestamp issue.
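The TIME_FORMAT above uses slashes (%Y/%m/%d) while the events use dashes, so one likely fix is simply aligning the separators. A sketch of the adjusted settings (verify against a sample of the data before deploying):

```
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N %z
MAX_TIMESTAMP_LOOKAHEAD = 30
```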
I wrote this query to pull tasks that are not closed from the ServiceNow index, but it's not working.

index="servicenow" sourcetype="snow:sc_task" AND sys_class_name="sc_task"
| fillnull "UnAssigned" dv_assigned_to
| stats latest(*) as * by dv_number
| search dv_state!="Closed Complete" AND dv_state!="Closed Incomplete"
| table sys_created_on, dv_number, dv_short_description, dv_state, dv_assigned_to
| rename dv_number as "Task Ticket#", dv_assigned_to as "Assigned To", dv_short_description as "Short Description"
| sort - sys_created_on, dv_number, dv_state
| fields - sys_created_on, dv_state

Could you please help me?
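One thing that stands out: fillnull takes its replacement string via the value= option, so a bare string is treated as a field name. A sketch of just the opening lines with that fix applied (the rest of the pipeline is unchanged):

```
index="servicenow" sourcetype="snow:sc_task" sys_class_name="sc_task"
| fillnull value="UnAssigned" dv_assigned_to
| stats latest(*) as * by dv_number
| search dv_state!="Closed Complete" AND dv_state!="Closed Incomplete"
```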
Hello, I am trying to perform an offline container install of SC4S and keep getting the following errors when trying to enable sc4s.service:

[/usr/lib/systemd/system/sc4s.service:30] Trailing garbage, ignoring.
[/usr/lib/systemd/system/sc4s.service:31] Unknown lvalue '-e "SC4S_CONTAINER_HOST' in section 'Service'
[/usr/lib/systemd/system/sc4s.service:32] Missing '='.
[/usr/lib/systemd/system/sc4s.service:30] Trailing garbage, ignoring.
[/usr/lib/systemd/system/sc4s.service:31] Unknown lvalue '-e "SC4S_CONTAINER_HOST' in section 'Service'
[/usr/lib/systemd/system/sc4s.service:32] Missing '='.

This is the template I am using, with the modifications suggested in the offline installation guide:

[Unit]
Description=SC4S Container
Wants=NetworkManager.service network-online.target docker.service
After=NetworkManager.service network-online.target docker.service
Requires=docker.service

[Install]
WantedBy=multi-user.target

[Service]
Environment="SC4S_IMAGE=sc4slocal:latest"
# Required mount point for syslog-ng persist data (including disk buffer)
Environment="SC4S_PERSIST_MOUNT=splunk-sc4s-var:/var/lib/syslog-ng"
# Optional mount point for local overrides and configurations; see notes in docs
Environment="SC4S_LOCAL_MOUNT=/opt/sc4s/local:/etc/syslog-ng/conf.d/local:z"
# Optional mount point for local disk archive (EWMM output) files
Environment="SC4S_ARCHIVE_MOUNT=/opt/sc4s/archive:/var/lib/syslog-ng/archive:z"
# Map location of TLS custom TLS
Environment="SC4S_TLS_MOUNT=/opt/sc4s/tls:/etc/syslog-ng/tls:z"
TimeoutStartSec=0
ExecStartPre=/usr/bin/bash -c "/usr/bin/systemctl set-environment SC4SHOST=$(hostname -s)"
ExecStart=/usr/bin/docker run \
    -e "SC4S_CONTAINER_HOST=${SC4SHOST}" \
    -v "$SC4S_PERSIST_MOUNT" \
    -v "$SC4S_LOCAL_MOUNT" \
    -v "$SC4S_ARCHIVE_MOUNT" \
    -v "$SC4S_TLS_MOUNT" \
    --env-file=/opt/sc4s/env_file \
    --network host \
    --name SC4S \
    --rm $SC4S_IMAGE
Restart=on-abnormal

Any advice on what might be wrong with the service file?
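"Trailing garbage" at the ExecStart= lines typically means systemd is not treating the following lines as continuations, which happens when a backslash is missing or has trailing whitespace after it, so `-e "SC4S_CONTAINER_HOST...` is parsed as a new directive. Under that assumption, one minimal check is to collapse ExecStart onto a single line, as in this sketch:

```
ExecStart=/usr/bin/docker run -e "SC4S_CONTAINER_HOST=${SC4SHOST}" -v "$SC4S_PERSIST_MOUNT" -v "$SC4S_LOCAL_MOUNT" -v "$SC4S_ARCHIVE_MOUNT" -v "$SC4S_TLS_MOUNT" --env-file=/opt/sc4s/env_file --network host --name SC4S --rm $SC4S_IMAGE
```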
I'm using a search query to search for data in All Time but want to display a timechart only for the last 60 days. If I try to use earliest=-2mon, it shows the timechart for 2 months but also loses the data past 60 days, which projects wrong data in the timechart. The current query looks like this:

index=data "search criteria" earliest=-2mon
| timechart usenull=f span=1w count by datapoints
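One way to keep the computation over All Time while trimming only the display is to filter on _time after the timechart, so the weekly counts are computed from the full data set. A sketch of that approach (-60d@d is an assumption for "last 60 days"):

```
index=data "search criteria"
| timechart usenull=f span=1w count by datapoints
| where _time >= relative_time(now(), "-60d@d")
```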
Hi, I'm trying to figure out how to detect whether one of our ecommerce integrations has an error and its transactions drop relative to their normal rate. For example, if we see 200 purchases a week and suddenly it goes down to 10. Each purchase has category and storefront ID fields in the events, so I would want to automatically adapt to new storefronts and categories that appear in the events. The output I would hope to get is an alert when a combo of storefront x with category y has dropped to 70% or less of its normal event rate, say over a period of a day or a week. How can you have a search work that way, where it compares its results in this period to the previous period? Thanks
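One pattern for period-over-period comparison is to search both windows at once, label each event by period, and compare counts per group. The sketch below assumes hypothetical field names storefront_id and category and compares the last 7 days against the 7 days before that:

```
index=purchases earliest=-14d@d latest=@d
| eval period = if(_time >= relative_time(now(), "-7d@d"), "current", "previous")
| stats count(eval(period="previous")) AS prev_count,
        count(eval(period="current")) AS curr_count
    BY storefront_id, category
| where prev_count > 0 AND curr_count <= 0.7 * prev_count
```

New storefront/category combinations are picked up automatically because the grouping comes from the events themselves; this could feed an alert that triggers when the search returns any rows.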
Hello, I have a Splunk ES instance on AWS. All logs are forwarded there from a Splunk HF (full forwarding, no indexing) which collects Active Directory data. The domain is accessible only via VPN. I would like to receive inputs from a syslog source (FortiGate firewalls) without installing a syslog-ng server. How can this happen in order to get the logs to the cloud? - Shall I set UDP 514 as a data input port so the HF will automatically forward the data to Splunk ES in the cloud via 9997? Even though I have a FortiGate add-on installed on the HF, when setting up 514 as a UDP syslog input there is no option to specify the app's correct sourcetype. - Can I receive a syslog input from another source on the same UDP 514 port and have it properly parsed? Thank you
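On the HF, a UDP stanza in inputs.conf can set the sourcetype explicitly even when the UI does not offer the option. A sketch under the assumption that the add-on documents which sourcetype it expects ("fortigate_log" here is a placeholder, not a confirmed value):

```
[udp://514]
connection_host = ip
sourcetype = fortigate_log
no_appending_timestamp = true
```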