All Topics

I have events like this coming from a heavy forwarder:

"geo": {"continent": "NA", "country": "UK", "city": "LONDON"}, "hostname": "xxxx xxx xxxx"

I have to override the host metadata with the hostname field from the event. My transforms.conf:

[hostoverride]
SOURCE_KEY = hostname
REGEX = (.*)
DEST_KEY = MetaData:Host
FORMAT = host::$1

props.conf:

[sourcetypename]
...
TRANSFORMS-hostoverride = hostoverride

In some of the events I am still getting the heavy forwarder name. Thanks for the help in advance.

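For comparison, a minimal sketch of the same transform anchored on _raw instead: index-time transforms run against _raw by default, and SOURCE_KEY = hostname only works if a key of that name already exists in the pipeline, so matching the JSON directly is one hedged alternative (stanza and sourcetype names are placeholders):

[hostoverride]
SOURCE_KEY = _raw
REGEX = "hostname":\s*"([^"]+)"
DEST_KEY = MetaData:Host
FORMAT = host::$1

If some events still show the forwarder's own name after that, it may also be worth confirming the transform lives on the instance that first parses those events, since already-cooked data is not reparsed downstream.
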
Using Splunk Enterprise 8.2.4 on Windows with a deployment server. Does the deployment server remove all locally configured apps when it deploys one or more apps to a forwarder? If not, can this be configured? I want to use the deployment server and prevent local admins on servers from creating their own configs.

1. How can each value be made clickable in a multivalue cell output?
2. Upon clicking, it should show the logs of success/warning/failure.
3. How can we color the cells according to the percentage of a single value in the multivalue cell? (Is this achievable without using JavaScript or CSS, as I don't have admin access in Splunk?)

The events contain SuccessCode=389,876, WarningCode=234, FailureCode=809. Cell color should be red for Failurepercent > 80%, yellow for Failurepercent 20% to 80%, and green for Failurepercent 0% to 20%. Below is the output table format.

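A minimal SPL sketch of just the percentage/threshold part, assuming the three counts are already extracted as numeric fields named SuccessCode, WarningCode and FailureCode (the clickable multivalue drilldown is a separate dashboard concern):

... | eval total = SuccessCode + WarningCode + FailureCode
    | eval Failurepercent = round(FailureCode / total * 100, 2)
    | eval severity = case(Failurepercent > 80, "red",
                           Failurepercent > 20, "yellow",
                           true(), "green")
    | table SuccessCode WarningCode FailureCode Failurepercent severity

The numeric Failurepercent column can then be colored from the table's Format menu (color by value ranges), which doesn't need JavaScript; making each individual value inside a multivalue cell clickable, as far as I know, does require custom drilldown handling.
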
I have noticed that my Splunk Enterprise 8.2.4 (all Windows) indexers are listening on TCP 9997 and forwarders are forwarding payloads in plaintext across the network, which the security team is naturally not happy with. So I'd like to use my PKI to issue some certificates for the indexers to start with (I'll worry about client certificates and mutual authentication down the line). I run a master, one search head, and an indexer cluster with two nodes. The guides seem clear enough on how to create the additional listener etc., but one thing is confusing me: the guide indicates to create the SSL listener and config under $SPLUNK_HOME/etc/system/local/inputs.conf, but on my indexers the existing listener is under etc\apps\search\local\inputs.conf.

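For reference, a minimal sketch of what the SSL listener usually looks like on an indexer; the certificate path and password are placeholders, and the stanza can live in $SPLUNK_HOME/etc/system/local/inputs.conf or in an app's local directory, as long as nothing with higher precedence overrides it:

# inputs.conf on each indexer (paths are assumptions)
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/indexer_cert.pem
sslPassword = <private key password>
requireClientCert = false

Running splunk btool inputs list --debug afterwards shows which file the effective listener configuration is actually coming from.
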
Issue: if the API response returns only one message tracking event, the start date and end date for the next API call end up the same as the previous call, and it becomes a loop; the script is unable to compute the next start time and end time. Also, if the script has an issue and can't collect data for, let's say, 48 hours, then to catch up on new logs the script has to make 48 calls (each call collects 1 hour of messages). If the script's interval is 5 minutes, catching up on the latest events takes at least 48 calls * 5 minutes = 240 minutes (around 4 hours).

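I don't have the script in front of me, but the windowing logic it seems to need looks roughly like this sketch; the function names, the one-hour window and the per-run catch-up limit are all assumptions, not the add-on's actual code:

from datetime import datetime, timedelta

WINDOW = timedelta(hours=1)          # assumed size of each collection window
MAX_WINDOWS_PER_RUN = 12             # catch up several windows per invocation

def collect(checkpoint: datetime, now: datetime, fetch):
    """Advance the checkpoint window-by-window, even when a window returns 0 or 1 events."""
    windows = 0
    while checkpoint < now and windows < MAX_WINDOWS_PER_RUN:
        start = checkpoint
        end = min(start + WINDOW, now)
        events = fetch(start, end)    # hypothetical API call covering [start, end)
        # ... index the events here ...
        checkpoint = end              # always move forward, regardless of result count
        windows += 1
    return checkpoint

Always advancing the checkpoint to the window's end time avoids the "same start and end" loop, and processing several windows per invocation shortens the 4-hour catch-up.
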
Hi, some questions... Last weekend we got an error on the indexers. It is a multisite cluster with 6<>6 indexers (6 indexers per site). Some indexers went down and the data storage went sky high; still not sure why. When we started the indexers that were down, the data storage partly went back to normal, except on one indexer. I noticed a lot of excess buckets... very, very many. I started removing these buckets, but it stopped at one point and never went further with cleaning. Could this be because the data partition of this one indexer is full (it is at this moment in automatic detention state)? I don't see the activity on the cluster master, so it seems it is finished, but I can't start a new action to clean; it says "Previously scheduled Remove Excess Buckets is running". I tried a rolling restart (in maintenance mode), but it isn't allowed because of the "remove excess buckets is running". How can I stop this "Previously scheduled Remove Excess Buckets is running"? Thanks in advance.

I'm a very basic Splunk admin using Splunk Enterprise 8.2.4 with a deployment server pushing out our apps/configs to the forwarders. I need to install the agent onto 100 existing Windows 2016/2019 servers. I can easily script up the MSI using MECM or the like, but I'm wondering if the Splunk deployment server can push the agent itself, or if it provides a PowerShell script I could hand to my server admins to do the same from the target servers?

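As far as I know the deployment server cannot push the initial agent install itself (the universal forwarder has to be installed before it can phone home as a deployment client), so a scripted MSI is the usual route. A hedged one-liner your server admins could run (MSI file name, host name and port are placeholders):

msiexec.exe /i splunkforwarder-8.2.4-x64-release.msi AGREETOLICENSE=Yes DEPLOYMENT_SERVER="deploy.example.com:8089" /quiet
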
Hey all, we are currently transitioning our users from local to SAML authentication, and with this, the saved searches/knowledge objects owned by the local users will need to be reassigned, as those accounts will soon be deleted from our environment. What would be the best practice here: should we reassign all of these knowledge objects to nobody, or should we assign them to their respective SAML user account equivalents? The KOs are general use cases, so we're thinking that assigning them to nobody would be fine, but it may cause quota hits, or some searches might not be executed if everything is assigned to nobody.

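If you do end up reassigning them to the SAML accounts, a hedged example of the REST call that changes a saved search's owner (host, app, search name and account names are placeholders; the bulk Reassign Knowledge Objects page under Settings does the same thing from the UI):

curl -k -u admin:changeme \
     https://sh.example.com:8089/servicesNS/old_local_user/search/saved/searches/My%20Search/acl \
     -d owner=new_saml_user -d sharing=app
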
Splunk trims leading and trailing spaces during field extraction. Normally this is the desired behavior, but for one of our fields the leading spaces must be preserved. This concerns a file input with KV_MODE = none. The solution should also not involve any manipulation of the field value, such as inserting quotes at the beginning and end of the value, which would otherwise preserve the spaces. Any ideas are welcome - is this possible at all?

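One hedged angle, assuming the field sits at a known position in the raw event: define the extraction yourself so the capture group explicitly includes the whitespace, rather than relying on automatic KV. A sketch with made-up sourcetype, delimiter and field names:

# props.conf (all names are placeholders; assumes a pipe-delimited layout)
[my:sourcetype]
KV_MODE = none
EXTRACT-padded_field = ^[^|]*\|(?<padded_field>[^|]*)\|

A regex-based extraction keeps exactly what the capture group matches, including leading spaces, without modifying the stored event.
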
Hello, we have a few URLs being monitored by a Splunk alert (query pasted below for reference) making use of the "Website Monitoring" add-on.

index=myindex sourcetype="web_ping" [| inputlookup URL.csv]
| streamstats count by response_code url
| where count>=2 and response_code>=300
| eval Timestamp=strftime(_time ,"%d/%m/%Y %H:%M:%S"), Status="Failure"
| rename response_code as "HTTP Response Code" url as URL
| dedup URL
| table Timestamp "HTTP Response Code" URL Status

The problem is that we are receiving the response_code and response_time fields empty, like below:

proxy_server="" title=abc.com timed_out=False proxy_port="" url=https://abc.com total_time="" request_time="" timeout=120 response_code="" proxy_type=http

Can anyone suggest troubleshooting steps to resolve this issue?

Hi, is it possible to feed OpenTelemetry logs to Splunk Enterprise and draw traces and spans without using Splunk APM? Thanks.

I have created a search that will trigger if no events are returned from the following:

index=ipl_prod source="e:\\logs\\icc-application.log" sourcetype="log4j:ipl" operationName=hentOpptjeningsperioder status=OK

The search is only triggered during business hours, Monday to Friday; the problem is that I cannot instruct the cron schedule not to trigger on holidays. Holidays mean no activity, so to make it a bit easier to evaluate whether this is a false positive, I want to add statistics of all statuses to the email being sent. Then we know that if no other statuses have been found either, it is safe to ignore. Something like:

index=ipl_prod source="e:\\logs\\icc-application.log" sourcetype="log4j:ipl" operationName=hentOpptjeningsperioder status=OK [if no events then subsearch and return those events]

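One hedged way to get both in a single alert search: drop the status=OK filter, pivot the counts per status into one row, and trigger on the OK column being missing or zero (the trigger itself is assumed to be set as a custom condition in the alert UI):

index=ipl_prod source="e:\\logs\\icc-application.log" sourcetype="log4j:ipl" operationName=hentOpptjeningsperioder
| chart count over operationName by status

The emailed table then shows a count for every status seen in the window, and a custom trigger condition such as where isnull('OK') OR 'OK'=0 fires only when no OK events were found.
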
Hey all, I have data that needs to be ingested with multiple lines, similar to the following:

************ Start Display Current Environment ************
***data***
***data***
***data***
************* End Display Current Environment *************
[13/11/21 5:21:15:183 AEDT] 00000001 ***data***
[13/11/21 5:21:15:276 AEDT] 00000001 ***data***
[13/11/21 5:21:15:278 AEDT] 00000001 ***data***
************ Start Display Current Environment ************
***data***
***data***
***data***
************* End Display Current Environment *************
[17/11/21 5:21:15:183 AEDT] 00000001 ***data***
[17/11/21 5:21:15:276 AEDT] 00000001 ***data***
[17/11/21 5:21:15:278 AEDT] 00000001 ***data***

Please note that the "Start/End Display Current Environment" lines are constant in length and in how they start, but belong to the timestamp that comes after them. Is there a way to parse this data?

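A hedged props.conf starting point that breaks a new event at either the banner line or a bracketed timestamp; the sourcetype name, date order and timezone are guesses from the sample, and the environment block would still need extra handling if it must inherit the timestamp of the line after it:

# props.conf on the parsing tier (names and formats are assumptions)
[my:applog]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\*+ Start Display Current Environment|\[\d{2}/\d{2}/\d{2})
TIME_PREFIX = ^\[
TIME_FORMAT = %d/%m/%y %H:%M:%S:%3N
MAX_TIMESTAMP_LOOKAHEAD = 32
TZ = Australia/Sydney

With this, each bracketed line becomes its own event and each Start/End block becomes one event; since the block has no timestamp of its own, Splunk will borrow a nearby one, which may or may not match the "timestamp after it" requirement.
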
Is there an SPL technique that allows you to specify multiple conditions with "OR" and assign a control number to the search results matching each condition?

=====SPL=====
index=xxxx sourcetype=yyyy
(ip=10.1.1.10 url=google.com earliest=1642899600 latest=1642900200) OR
(ip=10.1.1.20 url=facebook.com earliest=1642849200 latest=1642849800) OR
(ip= ....

=====The expected search results=====
NO,ip,url,_time
1,10.1.1.10,google.com/xxx,2022-01-23 10:04:30
2,10.1.1.20,facecook.com/xxxxx,2022-01-22 20:01:00
2,10.1.1.20,facecook.com/xxxxx,2022-01-22 20:01:30
3,....

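A hedged sketch of one way to do it: keep the OR groups for retrieval, then assign the number with a case() that repeats the same conditions (IPs and epoch times copied from the example above):

index=xxxx sourcetype=yyyy
    ((ip=10.1.1.10 url=google.com earliest=1642899600 latest=1642900200) OR
     (ip=10.1.1.20 url=facebook.com earliest=1642849200 latest=1642849800))
| eval NO = case(ip=="10.1.1.10" AND _time>=1642899600 AND _time<1642900200, 1,
                 ip=="10.1.1.20" AND _time>=1642849200 AND _time<1642849800, 2)
| table NO ip url _time
| sort NO _time

Repeating the conditions twice is clumsy but keeps everything in one search; a lookup mapping ip/time-range pairs to NO would be tidier if the list gets long.
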
Hi, is it possible to handle two different time formats? Some logs have the first time format below and other logs have the second. Apart from datetime.xml, is there any other way?

2022-01-24 02:27:20.989
2022-01-24T02:27:20.989

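If the two formats can be routed to two different sourcetypes, each can simply get its own TIME_FORMAT; a sketch with made-up sourcetype names:

# props.conf (sourcetype names are placeholders)
[app:logs:space_sep]
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30

[app:logs:iso_t]
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30

If they arrive under a single sourcetype, leaving TIME_FORMAT unset and letting Splunk's automatic timestamp recognition handle both variants is also worth testing before resorting to a custom datetime.xml.
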
Hi Splunkers, we have configured 3 new heavy forwarders in our Splunk Enterprise deployment, where 2 HFs were already working. Now we want traffic routed from the universal forwarders to all 5 HFs, but we are receiving traffic from only the old 2 HFs, not from the 3 newly introduced HFs. Telnet from the UF to the HFs works fine, and the inputs and outputs are configured properly. Can anyone suggest a solution for this? Thanks.

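For comparison, the load-balanced output group on the universal forwarders would normally just list all five receivers (host names and port are placeholders):

# outputs.conf on the universal forwarders
[tcpout]
defaultGroup = hf_group

[tcpout:hf_group]
server = hf1.example.com:9997, hf2.example.com:9997, hf3.example.com:9997, hf4.example.com:9997, hf5.example.com:9997

Running splunk btool outputs list --debug on one of the UFs shows which file actually wins, in case an older outputs.conf listing only the two original HFs still takes precedence.
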
index=logs appname="nameofapp " url=somewebsitenamestring
| stats count by user
| sort - count
| where count > 100

I would get results of 5 users, and I want to initiate a different search using those results. Can you let me know how I can do it?

index=logs appname="appname" user="here I need those 5 user names found in the results to be inserted" url=*somewebsitenamestring
| table _time user url

I would prefer to receive 5 individual CSV files, one for each user, rather than one file with all 5 users' data. Thanks for your help; please let me know if this is possible.

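A hedged sketch of the first part, feeding the matching users into the second search via a subsearch (app and URL values copied from the question):

index=logs appname="appname" url=*somewebsitenamestring
    [ search index=logs appname="nameofapp " url=somewebsitenamestring
      | stats count by user
      | where count > 100
      | fields user ]
| table _time user url

For five separate CSV files, one idea (an assumption, not something I have verified) is to run the per-user search with the map command and an outputcsv whose file name includes the $user$ token.
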
Getting a strange error when starting my Spring Boot application. We have a large number of applications already running AppDynamics without problems, but for one of them it does not work. The special thing about this app is that it uses Spring's LdapTemplate. The error:

j.l.IllegalAccessError: Class javax/naming/directory/InitialDirContext (module java.naming) can not access class com/singularity/ee/agent/appagent/entrypoint/bciengine/FastMethodInterceptorDelegatorBoot (unnamed module 0x00000000EE842658) because module java.naming does not read unnamed module 0x00000000EE842658
at j.n.d.InitialDirContext.search(InitialDirContext.java)
at o.s.l.c.LdapTemplate$4.executeSearch(LdapTemplate.java:322)
at o.s.l.c.LdapTemplate.search(LdapTemplate.java:363)
at o.s.l.c.LdapTemplate.search(LdapTemplate.java:328)
at o.s.l.c.LdapTemplate.search(LdapTemplate.java:604)
at o.s.l.c.LdapTemplate.search(LdapTemplate.java:594)
at o.s.l.c.LdapTemplate.search(LdapTemplate.java:482)
at o.s.l.c.LdapTemplate.search(LdapTemplate.java:498)
at o.s.l.c.LdapTemplate.search(LdapTemplate.java:514)

I have tried various combinations of --add-opens, --add-reads and --add-exports on the JVM, but nothing has helped so far.

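For what it's worth, the flag shape that matches this particular message (java.naming not being able to read the unnamed module the agent's interceptor class lives in) would be an --add-reads targeting ALL-UNNAMED; I can't promise it resolves the agent issue, but the syntax would be (agent path and jar name are placeholders):

java --add-reads java.naming=ALL-UNNAMED \
     --add-opens java.naming/javax.naming.directory=ALL-UNNAMED \
     -javaagent:/path/to/appdynamics/javaagent.jar \
     -jar my-spring-boot-app.jar
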
I have installed Splunk on a cgroup1/2 hybrid system using "enable boot-start systemd-managed 1" to start it on bootup. Yesterday I switched to a cgroup2-only system by disabling the usage of cgroup1 via grub/kernel boot parameters. Now Splunk doesn't start anymore because a file in the cgroup1 file system hierarchy is no longer present:

Jan 22 10:25:54 bigigloo systemd[1]: Stopping Systemd service file for Splunk, generated by 'splunk enable boot-start'...
Jan 22 10:30:58 bigigloo systemd[1]: Splunkd.service: Killing process 2847689 (python3.7) with signal SIGKILL.
Jan 22 10:30:58 bigigloo systemd[1]: Splunkd.service: Succeeded.
Jan 22 10:30:58 bigigloo systemd[1]: Stopped Systemd service file for Splunk, generated by 'splunk enable boot-start'.
-- Reboot --
Jan 22 10:36:19 bigigloo systemd[1]: Starting Systemd service file for Splunk, generated by 'splunk enable boot-start'...
Jan 22 10:36:19 bigigloo bash[3180]: chown: cannot access '/sys/fs/cgroup/cpu/system.slice/Splunkd.service': No such file or directory
Jan 22 10:36:22 bigigloo systemd[1]: Splunkd.service: Control process exited, code=exited, status=1/FAILURE
Jan 22 10:36:22 bigigloo systemd[1]: Splunkd.service: Killing process 3393 (sh) with signal SIGKILL.
Jan 22 10:36:22 bigigloo systemd[1]: Splunkd.service: Killing process 3408 (sh) with signal SIGKILL.
Jan 22 10:36:22 bigigloo systemd[1]: Splunkd.service: Failed with result 'exit-code'.
Jan 22 10:36:22 bigigloo systemd[1]: Failed to start Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Jan 22 10:36:22 bigigloo bash[3475]: chown: cannot access '/sys/fs/cgroup/cpu/system.slice/Splunkd.service': No such file or directory
Jan 22 10:36:22 bigigloo systemd[1]: Splunkd.service: Scheduled restart job, restart counter is at 1.
Jan 22 10:36:23 bigigloo bash[3480]: chown: cannot access '/sys/fs/cgroup/cpu/system.slice/Splunkd.service': No such file or directory
Jan 22 10:36:22 bigigloo systemd[1]: Stopped Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Jan 22 10:36:22 bigigloo systemd[1]: Starting Systemd service file for Splunk, generated by 'splunk enable boot-start'...
Jan 22 10:36:23 bigigloo bash[3496]: chown: cannot access '/sys/fs/cgroup/cpu/system.slice/Splunkd.service': No such file or directory
Jan 22 10:36:22 bigigloo systemd[1]: Splunkd.service: Control process exited, code=exited, status=1/FAILURE
Jan 22 10:36:22 bigigloo systemd[1]: Splunkd.service: Killing process 3476 (sh) with signal SIGKILL.
Jan 22 10:36:22 bigigloo systemd[1]: Splunkd.service: Killing process 3477 (btool) with signal SIGKILL.
Jan 22 10:36:22 bigigloo systemd[1]: Splunkd.service: Failed with result 'exit-code'.
Jan 22 10:36:22 bigigloo systemd[1]: Failed to start Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Jan 22 10:36:22 bigigloo systemd[1]: Splunkd.service: Scheduled restart job, restart counter is at 2.
Jan 22 10:36:22 bigigloo systemd[1]: Stopped Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Jan 22 10:36:22 bigigloo systemd[1]: Starting Systemd service file for Splunk, generated by 'splunk enable boot-start'...
Jan 22 10:36:22 bigigloo systemd[1]: Splunkd.service: Control process exited, code=exited, status=1/FAILURE
Jan 22 10:36:22 bigigloo systemd[1]: Splunkd.service: Killing process 3481 (sh) with signal SIGKILL.
Jan 22 10:36:22 bigigloo systemd[1]: Splunkd.service: Killing process 3482 (btool) with signal SIGKILL.
Jan 22 10:36:22 bigigloo systemd[1]: Splunkd.service: Failed with result 'exit-code'.
Jan 22 10:36:22 bigigloo systemd[1]: Failed to start Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Jan 22 10:36:23 bigigloo systemd[1]: Splunkd.service: Scheduled restart job, restart counter is at 3.
Jan 22 10:36:23 bigigloo systemd[1]: Stopped Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Jan 22 10:36:23 bigigloo systemd[1]: Starting Systemd service file for Splunk, generated by 'splunk enable boot-start'...
Jan 22 10:36:23 bigigloo systemd[1]: Splunkd.service: Control process exited, code=exited, status=1/FAILURE
Jan 22 10:36:23 bigigloo systemd[1]: Splunkd.service: Killing process 3497 (sh) with signal SIGKILL.
Jan 22 10:36:23 bigigloo systemd[1]: Splunkd.service: Killing process 3499 (btool) with signal SIGKILL.
Jan 22 10:36:23 bigigloo systemd[1]: Splunkd.service: Failed with result 'exit-code'.
Jan 22 10:36:23 bigigloo systemd[1]: Failed to start Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Jan 22 10:36:23 bigigloo systemd[1]: Splunkd.service: Scheduled restart job, restart counter is at 4.
Jan 22 10:36:23 bigigloo systemd[1]: Stopped Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Jan 22 10:36:23 bigigloo systemd[1]: Starting Systemd service file for Splunk, generated by 'splunk enable boot-start'...

I tracked the problem down to the two ExecStartPost commands in the unit file /etc/systemd/system/Splunkd.service. Commenting those two out fixed the problem.

#This unit file replaces the traditional start-up script for systemd
#configurations, and is used when enabling boot-start for Splunk on
#systemd-based Linux distributions.
[Unit]
Description=Systemd service file for Splunk, generated by 'splunk enable boot-start'
After=network.target

[Service]
Type=simple
Restart=always
ExecStart=/opt/splunk/bin/splunk _internal_launch_under_systemd
KillMode=mixed
KillSignal=SIGINT
TimeoutStopSec=360
LimitNOFILE=65536
SuccessExitStatus=51 52
RestartPreventExitStatus=51
RestartForceExitStatus=52
User=root
Group=root
Delegate=true
CPUShares=1024
MemoryLimit=20868083712
PermissionsStartOnly=true
#ExecStartPost=/bin/bash -c "chown -R root:root /sys/fs/cgroup/cpu/system.slice/%n"
#ExecStartPost=/bin/bash -c "chown -R root:root /sys/fs/cgroup/memory/system.slice/%n"

[Install]
WantedBy=multi-user.target

However, I presume updates of Splunk might restore the unit file to the old variant again. What do I need to do to make the start of Splunk cgroup2 compliant?

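To keep a fix like this across Splunk upgrades or re-runs of enable boot-start, one hedged option is a systemd drop-in instead of editing the generated unit file; an empty ExecStartPost= in an override clears the inherited lines and survives the main unit being rewritten (the unified-hierarchy path in the comment is an assumption about your layout):

# /etc/systemd/system/Splunkd.service.d/override.conf  (e.g. created via: systemctl edit Splunkd.service)
[Service]
# An empty assignment clears the ExecStartPost= lines inherited from the main unit.
ExecStartPost=
# If the ownership change is still wanted, cgroup2 has a single tree:
#ExecStartPost=/bin/bash -c "chown -R root:root /sys/fs/cgroup/system.slice/%n"

Remember to run systemctl daemon-reload after adding the drop-in.
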
My file contains a line at the end that mentions the return code. The format looks like the example below: if the job fails it returns 32, and if the job is successful it returns 0.

Main -> ** Execution completed with returnCode: 0,

Can someone help me build a Splunk query to alert me when the code is 32 or 0?

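A hedged starting point, assuming the line is ingested as its own event (index and sourcetype names are placeholders):

index=my_index sourcetype=my_jobs "Execution completed with returnCode"
| rex field=_raw "returnCode:\s*(?<returnCode>\d+)"
| where tonumber(returnCode) = 32
| table _time source returnCode

Saved as an alert that triggers when the number of results is greater than zero, this fires only on failures; dropping the where clause and tabling returnCode instead reports both the 0 and 32 outcomes.
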