All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I am looking for a way to move AK and HI on the dashboard so they can be viewed alongside the continental US. I did find the following documentation: https://docs.splunk.com/Documentation/DashApp/0.9.0/DashApp/mapsChorConfig#:~:text=One%20exception%20is%20the%20placement,increase%20the%20%22y%22%20value. However, when I convert the JSON to XML it states that 'LogicalBounds' is invalid. Is there a way to use this? I do not want to zoom in and out on my Splunk dashboard to view data.
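For what it's worth, logicalBounds is a Dashboard Studio (JSON) option described in that doc; Simple XML has no equivalent element, which is likely why the conversion flags it as invalid. The JSON shape the doc describes looks roughly like this (a sketch with placeholder numbers, not verified against DashApp 0.9.0):

    "options": {
        "logicalBounds": { "x": 0, "y": 0, "width": 960, "height": 600 }
    }

Per the linked doc fragment, repositioning a state such as Alaska or Hawaii is a matter of adjusting the "x"/"y" values in the corresponding bounds entry, so the dashboard would need to stay in Dashboard Studio rather than be converted to Simple XML.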
I have version 1.76 of the TA-user-agents app installed on my search head for use with searching against web access logs; to prepare for this, I created a field extraction called "http_user_agent" to extract a string similar to this:

Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.3 Safari/605.1.15

When I run the following search over a 15 minute timeframe in Fast Mode...

    index=foo source=*access* sourcetype=bar
    | lookup user_agents http_user_agent
    | search ua_family="Safari" ua_os_family="Mac OS X"
    | eval browserOS = ua_family . ":" . ua_os_family
    | timechart count by browserOS limit=0

...I find that it takes what seems to be a VERY long time to complete (446 seconds to search through 418,000 events). If I just run the base search without the "lookup", "search", "eval" and "timechart" commands, it takes 3.3 seconds to execute against the same number of events. Is this expected behavior for the TA, or is my search not optimized correctly?
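One thing worth trying (a sketch, assuming the delay comes from the user-agent lookup rather than the timechart): restrict the lookup to only the output fields the search actually uses, so less work is done and carried per event:

    index=foo source=*access* sourcetype=bar
    | lookup user_agents http_user_agent OUTPUT ua_family ua_os_family
    | search ua_family="Safari" ua_os_family="Mac OS X"
    | eval browserOS = ua_family . ":" . ua_os_family
    | timechart count by browserOS limit=0

If it is still slow, the Job Inspector's command timings will show whether the time is going to the lookup itself; if the TA implements it as an external scripted lookup (as user-agent parsers often are), per-event script invocation is typically the dominant cost.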
I am having a lot of skipped reports, and I think it is due to the "nobody" user. Increasing the system setting in limits.conf does not have an impact on reports that run as nobody. On a beta site, I increased max_hist_searches to a huge number, but I still get the same errors. Other users can have unlimited searches, but because nobody is not an official role, am I correct to say that is why I am getting all these skipped-search alerts? So I think if I assign the reports to a user, they will fall under a role and then it will work?
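If the role-quota theory is right, one way to test it (a sketch, not a confirmed fix; svc_reports is a hypothetical service account, and <app> and <report_name> are placeholders) is to reassign one report's owner via the ACL REST endpoint so its scheduled runs count against a real role's quota:

    curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/<app>/saved/searches/<report_name>/acl -d owner=svc_reports -d sharing=app

If the skips stop for that report, assigning the rest to a user whose role has an appropriate srchJobsQuota in authorize.conf should resolve it.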
How do I write the time format for the below event log?

2022-04-07 20:40:03.360 -06:00 [XXX] Hosting starting.
2022-04-07 20:40:03.474 -06:00 [XXX] Hosting starting.
2022-04-07 20:40:03.493 -06:00 [XXX] Hosting starting.

I am getting "Could not use strptime to parse the time stamp from 2022-04-07 20:40:03.360 -06:00".

[Sourcetype]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)\d+\-\d+\-\d+\s\d+\:\d+\d:\d+\.\d+\s+[^\]]+\]
NO_BINARY_CHECK=true
disabled=false
TIME_PREFIX=^
TIME_FORMAT=%Y/%m/%d %H:%M:%S.%3N %z
MAX_TIMESTAMP_LOOKAHEAD=31

Kindly guide me to fix this timestamp issue.
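A likely culprit (an observation from the sample events, not tested end to end): the timestamps use dashes (2022-04-07) while TIME_FORMAT uses slashes (%Y/%m/%d), so strptime can never match. A corrected sketch, assuming Splunk's %z accepts the -06:00 offset form (it generally does):

    TIME_PREFIX=^
    TIME_FORMAT=%Y-%m-%d %H:%M:%S.%3N %z
    MAX_TIMESTAMP_LOOKAHEAD=31

MAX_TIMESTAMP_LOOKAHEAD=31 comfortably covers the 30-character "2022-04-07 20:40:03.360 -06:00".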
I wrote this query in order to pull non-closed tasks from the ServiceNow index, but it's not working.

    index="servicenow" sourcetype="snow:sc_task" AND sys_class_name="sc_task"
    | fillnull "UnAssigned" dv_assigned_to
    | stats latest(*) as * by dv_number
    | search dv_state!="Closed Complete" AND dv_state!="Closed Incomplete"
    | table sys_created_on, dv_number, dv_short_description, dv_state, dv_assigned_to
    | rename dv_number as "Task Ticket#", dv_assigned_to as "Assigned To", dv_short_description as "Short Description"
    | sort - sys_created_on, dv_number, dv_state
    | fields - sys_created_on, dv_state

Could you please help me?
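Two things stand out in the query as written (observations on the SPL itself, assuming the field names in the post are correct). First, fillnull needs the value= keyword, otherwise "UnAssigned" is treated as a field name and nothing is filled:

    | fillnull value="UnAssigned" dv_assigned_to

Second, by the time the sort runs, dv_number has already been renamed to "Task Ticket#", so that sort key is empty; sorting before the rename (or sorting on "Task Ticket#") avoids that.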
Hello, I am trying to perform an offline container install of SC4S and keep getting the following error when trying to enable sc4s.service:

[/usr/lib/systemd/system/sc4s.service:30] Trailing garbage, ignoring.
[/usr/lib/systemd/system/sc4s.service:31] Unknown lvalue '-e "SC4S_CONTAINER_HOST' in section 'Service'
[/usr/lib/systemd/system/sc4s.service:32] Missing '='.
[/usr/lib/systemd/system/sc4s.service:30] Trailing garbage, ignoring.
[/usr/lib/systemd/system/sc4s.service:31] Unknown lvalue '-e "SC4S_CONTAINER_HOST' in section 'Service'
[/usr/lib/systemd/system/sc4s.service:32] Missing '='.

This is the template I am using, with modifications suggested in the offline installation guide:

[Unit]
Description=SC4S Container
Wants=NetworkManager.service network-online.target docker.service
After=NetworkManager.service network-online.target docker.service
Requires=docker.service

[Install]
WantedBy=multi-user.target

[Service]
Environment="SC4S_IMAGE=sc4slocal:latest"
# Required mount point for syslog-ng persist data (including disk buffer)
Environment="SC4S_PERSIST_MOUNT=splunk-sc4s-var:/var/lib/syslog-ng"
# Optional mount point for local overrides and configurations; see notes in docs
Environment="SC4S_LOCAL_MOUNT=/opt/sc4s/local:/etc/syslog-ng/conf.d/local:z"
# Optional mount point for local disk archive (EWMM output) files
Environment="SC4S_ARCHIVE_MOUNT=/opt/sc4s/archive:/var/lib/syslog-ng/archive:z"
# Map location of custom TLS
Environment="SC4S_TLS_MOUNT=/opt/sc4s/tls:/etc/syslog-ng/tls:z"
TimeoutStartSec=0

ExecStartPre=/usr/bin/bash -c "/usr/bin/systemctl set-environment SC4SHOST=$(hostname -s)"

ExecStart=/usr/bin/docker run \
    -e "SC4S_CONTAINER_HOST=${SC4SHOST}" \
    -v "$SC4S_PERSIST_MOUNT" \
    -v "$SC4S_LOCAL_MOUNT" \
    -v "$SC4S_ARCHIVE_MOUNT" \
    -v "$SC4S_TLS_MOUNT" \
    --env-file=/opt/sc4s/env_file \
    --network host \
    --name SC4S \
    --rm $SC4S_IMAGE

Restart=on-abnormal

Any advice on what might be wrong with the service file?
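Those three messages (a diagnosis sketch, not confirmed against your exact file) are what systemd prints when the backslash line continuations in ExecStart stop working: most often there is trailing whitespace after a continuation backslash, or the file was saved with Windows CRLF line endings, so systemd sees the -e and -v lines as standalone directives. A quick check for both:

    # Show invisible characters around the ExecStart continuation lines.
    # A continuation line must end in exactly "\$"; "^M$" indicates CRLF endings.
    cat -A /usr/lib/systemd/system/sc4s.service | sed -n '28,34p'

If CRLF endings show up, converting the file (e.g. with dos2unix) and re-running systemctl daemon-reload should clear the errors.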
I'm using a search query to search for data in "All time" but want to display a timechart only for the last 60 days. If I try to use "earliest=-2mon", it shows the timechart for 2 months but also loses the data past 60 days, which produces wrong data in the timechart.

The current query looks like this:

    index=data "search criteria" earliest=-2mon
    | timechart usenull=f span=1w count by datapoints
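One way to keep the search window at All time while trimming the chart (a sketch using the field names from the post): run the timechart over everything, then drop buckets older than 60 days afterwards:

    index=data "search criteria"
    | timechart usenull=f span=1w count by datapoints
    | where _time >= relative_time(now(), "-60d@d")

Because the where clause runs after timechart, the search still sees the full dataset; only the displayed rows are limited to the last 60 days.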
Hi, I'm trying to figure out how to detect when one of our ecommerce integrations has an error and its transactions drop relative to their normal rate, for example if we see 200 purchases a week and suddenly it goes down to 10. Each purchase has category and storefront ID fields in the events, so I would want to automatically adapt to new storefronts and categories that appear in the events. The output I would hope to get is an alert when a combo of storefront x with category y has dropped to 70% or less of its normal event rate over, say, a period of a day or a week. How can you have a search work that way, where it compares its results in this period to the previous period? Thanks
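A sketch of one week-over-week approach (the index name and the storefront_id/category field names are assumptions based on the description): bucket events into a current and a previous window, count per combo, and alert on the ratio.

    index=purchases earliest=-14d@d latest=@d
    | eval period=if(_time >= relative_time(now(), "-7d@d"), "current", "previous")
    | stats count(eval(period="current")) as current, count(eval(period="previous")) as previous by storefront_id, category
    | where previous > 0 AND current <= 0.7 * previous

Saved as an alert that triggers when results exist, this adapts automatically: any new storefront/category combo that shows up in the events becomes its own row in the stats grouping.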
Hello, I have a Splunk ES instance on AWS. All logs are forwarded there from a Splunk HF (full forwarding, no indexing) which collects Active Directory data. The domain is accessible only via VPN. I would like to receive inputs from a syslog source (FortiGate firewalls) without installing a syslog-ng server. How can this happen in order to get the logs to the Cloud?

- Shall I set UDP 514 as a data input port, so the HF will automatically forward the data to Splunk ES in the Cloud via 9997? Even though I have a FortiGate add-on installed on the HF, while setting 514 as a UDP syslog input there is no option to specify the app's correct sourcetype.
- Can I receive a syslog input from another source on the same UDP 514 port and have it properly parsed?

Thank you
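A minimal sketch of the HF-side input (inputs.conf; the index name is a placeholder, and fortigate_log is an assumption — check which sourcetype your FortiGate add-on actually expects):

    [udp://514]
    sourcetype = fortigate_log
    index = netfw
    connection_host = ip
    no_appending_timestamp = true

Anything the HF receives on this input follows its existing outputs.conf to the Cloud on 9997, so no extra forwarding setup is needed. As for the second question: a single UDP input can only assign one sourcetype, so a second syslog source on the same port would need a props/transforms override keyed on the sender's host/IP to re-sourcetype those events.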
Hello, I have a Splunk ES instance on AWS. All logs are forwarded there from a Splunk HF (full forwarding, no indexing) which collects Active Directory data. The domain is accessible only via VPN. I would like to populate Assets and Identities in ES. Since the Cloud instance cannot access the domain, the only way I can think of is using SA-LDAPSearch on the heavy forwarder. I set it up and it successfully connects to LDAP. Question: how can I push the logs and create the lookup tables that will eventually populate the Assets and Identities in ES? Thanks!
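A sketch of the lookup-generation step using SA-LDAPSearch's ldapsearch command (the domain stanza name, LDAP filter, attribute list, and lookup file name below are all placeholders):

    | ldapsearch domain=default search="(&(objectClass=user)(sAMAccountName=*))" attrs="sAMAccountName,mail,displayName"
    | rename sAMAccountName as identity, mail as email, displayName as nick
    | outputlookup ldap_identities.csv

The catch is that this runs where SA-LDAPSearch is installed, so the resulting CSV lives on the HF. To feed ES's Identity Management the lookup still has to exist on the ES search head, either by installing SA-LDAPSearch there if the VPN route allows it, or by shipping/syncing the CSV to ES and registering it as an identity input there.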
Hello!

I have a search table that matches some values and users, like this:

    is_old_OS_version   username
    true                Bob
    false               Marie
    true                Alice

I want to send alerts via Slack only to Bob and Alice, and not to Marie. I know that I need a Slack application, and I have already made it. But how do I integrate Splunk with this application and mention only the people I need? Basically I have two strategies here:

1. Send to some channel and mention the people I need with @ (not the best option, because I will mention lots of people with old software in one place)
2. Send directly to each person

There are multiple Splunk applications that help integrate with Slack, but as far as I can see, I can only choose one channel ID per alert, while I need to dynamically change this ID or find another way.
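One pattern that avoids a fixed channel (a sketch; it assumes the Slack alert action you use expands $result.*$ tokens in its channel field, which the common Slack Notification Alert app does — worth verifying for your app): have the search emit one row per recipient with the target channel, then fire the action once per result.

    | where is_old_OS_version="true"
    | eval slack_channel = "@" . username

With the alert's trigger set to "For each result" and $result.slack_channel$ in the Slack action's channel field, Bob and Alice each get a direct message while Marie is filtered out before the action runs.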
Hi there, I am trying to use spath to extract fields inside a string. Currently, the string has this format:

    stringField=["fieldOne": "fieldValue", "fieldTwo": "fieldValue", "fieldThree": "fieldValue"]

So the string contains some kind of array with key-value pairs. I would like to extract those fields and values in a way that lets me use them in my queries: for example, get the value of fieldOne by just referencing the fieldOne variable/object, so I can perform my desired stats and so on. I was trying something like the following, but no luck!

    search... | spath input=stringField
    search... | eval newVariable=spath(_raw, 'stringField')
    search... | spath
    search... | spath path=stringField output=newField

The first option, just using input with the spath command, only gave me back the first field inside my string, and it was listed as a {} field. I would really appreciate the help!
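Since the value is bracketed but otherwise made of JSON-style "key": "value" pairs, one trick (a sketch, assuming the values themselves never contain square brackets) is to rewrite it into valid JSON and then hand it to spath:

    search...
    | eval json = replace(replace(stringField, "^\[", "{"), "\]$", "}")
    | spath input=json

spath fails on the original because [...] with "key": "value" pairs is not valid JSON; after the rewrite, fieldOne, fieldTwo, and fieldThree become regular fields you can use in stats and elsewhere.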
Hi, our current requirement is to install two UFs, versions 8.0.2 and 8.0.6, on one single Windows VM server. We installed the first UF in the normal way by following the Splunk docs, and when we try to install the 2nd UF, it installs into the same directory and upgrades the already-installed UF. Is it possible to run two UFs together on a Windows server? If so, please let me know the steps and procedure to install. I found this link; has anyone tried it, and did it work by following these steps? https://www.splunk.com/en_us/blog/tips-and-tricks/running-two-universal-forwarders-on-windows.html Thanks
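The linked blog's approach boils down to installing the second instance into its own directory and giving it its own management port and service name. A sketch of the install step only, using standard MSI flags (the directory name and port are placeholders; the blog covers the remaining service-name steps):

    msiexec.exe /i splunkforwarder-8.0.6-x64-release.msi INSTALLDIR="C:\SplunkUF2" AGREETOLICENSE=Yes /quiet
    "C:\SplunkUF2\bin\splunk.exe" set splunkd-port 8090

Without a distinct splunkd management port, the second instance will collide with the first on 8089.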
Hello Splunkers, I want to optimize my Splunk search. I have attached a screenshot of my search. From the raw data I am retrieving the service names with OR conditions. I don't want to hardcode all the service names using OR clauses. Please give me some suggestions on how I can optimize the search without using OR clauses.
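A common pattern (a sketch; services.csv and its service_name column are placeholders, and this assumes the service name lands in an extracted field rather than only in raw text): keep the list in a lookup file and let a subsearch expand it:

    index=myindex [ | inputlookup services.csv | fields service_name ]

The subsearch expands to (service_name="svc1" OR service_name="svc2" OR ...), so adding a service means editing the CSV, not the search. For a short fixed list, service_name IN ("svc1", "svc2", "svc3") is a tidier inline alternative on recent Splunk versions.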
Hi All, I'm very new to Splunk. Can someone help me: after how many days will data move from a hot bucket to a warm bucket? Note: I know the default is 90 days, but I want proof that I can show, so can someone guide me on where I can find this? Thank you in advance!!
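The authoritative place to point at is the indexes.conf spec: hot buckets roll to warm when they hit a size or age limit, whichever comes first, not on a fixed schedule. The 90 days you mention corresponds to the default maxHotSpanSecs (a sketch showing the relevant defaults; [myindex] is a placeholder):

    [myindex]
    maxHotSpanSecs = 7776000   # 90 days: max lifetime of a hot bucket before it rolls to warm
    maxDataSize = auto         # size-based roll (auto is ~750MB), whichever limit is hit first

For proof on your own system, $SPLUNK_HOME/bin/splunk btool indexes list --debug prints every effective setting along with the file it came from.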
Hi All, I need to understand how the standard deviation and moving average are calculated in AppDynamics for slow and very slow transaction thresholds. Please help with examples. Is it possible to do a manual calculation of the standard deviation and moving average by using the reference values from AppDynamics for a few of the transactions?

^ Post edited by @Ryan.Paredez for formatting and clarity
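As a generic illustration of the formula involved (the window length AppDynamics actually uses is internal to the product, so treat this as the standard definition rather than AppD's documented procedure): for response times of 100, 120, 110, 130, and 140 ms, the moving average is (100 + 120 + 110 + 130 + 140) / 5 = 120 ms, and the sample standard deviation is sqrt(((-20)^2 + 0^2 + (-10)^2 + 10^2 + 20^2) / 4) ≈ 15.8 ms. A "slower than 3 standard deviations" threshold would then sit at 120 + 3 × 15.8 ≈ 167 ms.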
Hi All, I have configured the 99th percentile response time in the configuration tab under slow transaction thresholds for business transactions. The 99th percentile response time metric is not available/visible under the Metric Browser for the business transaction. Are any extra steps needed after configuring the metric under slow transaction thresholds for it to appear in the Metric Browser? Regards, Naveen D
I am using Splunk Connect for Kubernetes and have been trying to forward XML file logs to Splunk with the splunk-connect-for-kubernetes repo. Can you please help with the path of the XML log files and the configuration of the container XML log file?

    # path of logfiles, default /var/log/containers/*.log
    # Configurations for container logs
    containers:
      # Path to root directory of container logs
      path: ?
      # Final volume destination of container log symlinks
      pathDest: ?
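For reference, the defaults for that block in the splunk-kubernetes-logging chart are the host paths below, and file-based logs that don't go to container stdout are usually added as a custom logs stanza (the my-xml-app name, path, and sourcetype are placeholders/assumptions):

    containers:
      path: /var/log
      pathDest: /var/lib/docker/containers

    logs:
      my-xml-app:
        from:
          file:
            path: /var/log/my-xml-app/*.xml
        sourcetype: my_xml_app

Note that the chart tails container stdout/stderr via /var/log/containers/*.log; if the XML is written to a file inside the pod, it must be surfaced on the host (e.g. via a hostPath volume) before a file stanza like the one above can see it.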
Hello there, I am working on VMware. I have two Linux machines that I'm using as universal forwarders (an Ubuntu desktop and a Linux server, configured in exactly the same way as forwarders). I have another Linux machine that I'm using as an indexer. The thing is that one of my forwarders (the Linux server) is forwarding correctly to the indexer, and I can see all the information I need in the main index. BUT the second forwarder's logs are nowhere to be found: I can see the 2nd universal forwarder when I search index=_internal, but the main index doesn't show any of its logs. Can someone help me so I can see the logs from the second forwarder? Have a great day everyone! Abir
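A couple of checks that usually narrow this down (a sketch; paths assume a default /opt/splunkforwarder install):

    # On the silent forwarder: is the indexer configured and the connection active?
    /opt/splunkforwarder/bin/splunk list forward-server
    # Are the file monitors you expect actually configured?
    /opt/splunkforwarder/bin/splunk list monitor

Since _internal events from the second forwarder do arrive, the output connection works; the gap is most likely on the inputs side (missing monitor stanzas, or the Splunk user lacking read permission on the files). On the indexer, searching index=_internal host=<second-forwarder> sourcetype=splunkd TcpOutputProc will surface any output-pipeline errors if there are some after all.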
Hello, my issue is that my dashboard continues to load the old .js; in network calls I see it's requested with a "version", like: /static/@ab..../handler.js When I browse the URL after deleting the version number, I can see my new script has been correctly installed, so I tried to call _bump, but even though the URL is found, it loads an empty page (without the Bump button like in the Enterprise version), and my script continues to be loaded from the old version. Does anyone know if it's possible to bump on Cloud? I tried changing the build number in app.conf, but to no avail. Thank you.