All Topics


Is anyone else running into boot-start/permissions issues with the 9.0.0 UF running on Linux using init.d scripts for boot-start? I keep seeing:

    Warning: Attempting to revert the SPLUNK_HOME ownership
    Warning: Executing "chown -R splunk /opt/splunkforwarder"

I am also finding that "./splunk disable boot-start" does not correctly remove the /etc/init.d/splunk script and that, contrary to the documentation, the 9.0.0 UF uses systemd as default: https://docs.splunk.com/Documentation/Splunk/9.0.0/Admin/ConfigureSplunktostartatboottime

The systemd scripts also seem to fail to get the permissions they need, even when trying to enable boot-start as root. A key error I am seeing is "Failed to create the unit file" when running the install, but it seems to be a total fail.

    ## When upgrading (from 8.2.5)
    runuser -l splunk -c "/opt/splunkforwarder/bin/splunk stop"
    tar -xzvf /tmp/splunkforwarder-9.0.0-6818ac46f2ec-Linux-x86_64.tgz -C /opt
    chown -R splunk:splunk /opt/splunkforwarder/
    runuser -l splunk -c "/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt"
    runuser -l splunk -c "/opt/splunkforwarder/bin/splunk status"

    Warning: Attempting to revert the SPLUNK_HOME ownership
    Warning: Executing "chown -R splunk /opt/splunkforwarder"

(NOTE: Seems to be non-impacting.)

    ### When doing a new install
    tar -xzvf /tmp/splunkforwarder-9.0.0-6818ac46f2ec-Linux-x86_64.tgz -C /opt
    chown -R splunk:splunk /opt/splunkforwarder

    [root]# sudo -H -u splunk /opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt
    Warning: Attempting to revert the SPLUNK_HOME ownership
    Warning: Executing "chown -R splunk /opt/splunkforwarder"
    This appears to be your first time running this version of Splunk.
    IMPORTANT: Because an admin password was not provided, the admin user will not be created.
    You will have to set up an admin username/password later using user-seed.conf.
    Creating unit file...
    Current splunk is running as non root, which cannot operate systemd unit files.
    Please create it manually by 'sudo splunk enable boot-start' later.
    Failed to create the unit file. Please do it manually later.
    Splunk> Now with more code!

    sudo -H -u splunk /opt/splunkforwarder/bin/splunk status
    Warning: Attempting to revert the SPLUNK_HOME ownership
    Warning: Executing "chown -R splunk /opt/splunkforwarder"
    splunkd is running (PID: 3132350).
    splunk helpers are running (PIDs: 3132354).

    # sudo -H -u splunk /opt/splunkforwarder/bin/splunk stop
    Warning: Attempting to revert the SPLUNK_HOME ownership
    Warning: Executing "chown -R splunk /opt/splunkforwarder"
    Stopping splunkd...
    Shutting down. Please wait, as this may take a few minutes.
    .                                                    [  OK  ]
    Stopping splunk helpers...
                                                         [  OK  ]
    Done.

    # /opt/splunkforwarder/bin/splunk enable boot-start -user splunk
    Systemd unit file installed by user at /etc/systemd/system/SplunkForwarder.service.
    Configured as systemd managed service.

    systemctl start SplunkForwarder.service
    Job for SplunkForwarder.service failed because the control process exited with error code.
    See "systemctl status SplunkForwarder.service" and "journalctl -xe" for details.

    systemctl status SplunkForwarder.service
    ● SplunkForwarder.service - Systemd service file for Splunk, generated by 'splunk enable boot-start'
       Loaded: loaded (/etc/systemd/system/SplunkForwarder.service; enabled; vendor preset: disabled)
       Active: failed (Result: exit-code) since Tue 2022-06-21 12:58:55 UTC; 27s ago
      Process: 3141480 ExecStartPost=/bin/bash -c chown -R splunk:splunk /sys/fs/cgroup/memory/system.slice/SplunkForwarder.service (code=exited, status=0/SUCCESS)
      Process: 3141478 ExecStartPost=/bin/bash -c chown -R splunk:splunk /sys/fs/cgroup/cpu/system.slice/SplunkForwarder.service (code=exited, status=0/SUCCESS)
      Process: 3141477 ExecStart=/opt/splunkforwarder/bin/splunk _internal_launch_under_systemd (code=exited, status=203/EXEC)
      Process: 3141475 ExecStartPre=/bin/bash -c chown -R splunk:splunk /opt/splunkforwarder (code=exited, status=0/SUCCESS)
     Main PID: 3141477 (code=exited, status=203/EXEC)

    Jun 21 12:58:55 <host> systemd[1]: SplunkForwarder.service: Failed with result 'exit-code'.
    Jun 21 12:58:55 <host> systemd[1]: Failed to start Systemd service file for Splunk, generated by 'splunk enable boot-start'.
    Jun 21 12:58:55 <host> systemd[1]: SplunkForwarder.service: Service RestartSec=100ms expired, scheduling restart.
    Jun 21 12:58:55 <host> systemd[1]: SplunkForwarder.service: Scheduled restart job, restart counter is at 5.
    Jun 21 12:58:55 <host> systemd[1]: Stopped Systemd service file for Splunk, generated by 'splunk enable boot-start'.
    Jun 21 12:58:55 <host> systemd[1]: SplunkForwarder.service: Start request repeated too quickly.
    Jun 21 12:58:55 <host> systemd[1]: SplunkForwarder.service: Failed with result 'exit-code'.
    Jun 21 12:58:55 <host> systemd[1]: Failed to start Systemd service file for Splunk, generated by 'splunk enable boot-start'.
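A sketch of next steps, assuming a default /opt/splunkforwarder install: status=203/EXEC means systemd could not execute the ExecStart binary at all, so it is worth checking the binary's permissions (and SELinux, if enforcing) before recreating the unit as root. The -systemd-managed flag should exist on 8.x/9.x; confirm with ./splunk help boot-start. This is a minimal sketch, not a confirmed fix for the error above.

    # Stop the forwarder and remove the half-working unit from earlier attempts.
    sudo -H -u splunk /opt/splunkforwarder/bin/splunk stop
    rm -f /etc/systemd/system/SplunkForwarder.service
    systemctl daemon-reload

    # 203/EXEC: verify the splunk user can traverse the path and execute the binary.
    ls -ld /opt /opt/splunkforwarder /opt/splunkforwarder/bin
    ls -l /opt/splunkforwarder/bin/splunk
    # If SELinux is enforcing, check for denials (an assumption, not confirmed from the post):
    # ausearch -m avc -ts recent

    # Recreate the unit as root, explicitly requesting systemd management.
    /opt/splunkforwarder/bin/splunk enable boot-start -systemd-managed 1 -user splunk
    systemctl start SplunkForwarder.service
    systemctl status SplunkForwarder.service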
Hi All, I am new to Splunk and not a developer, so first up, apologies for any poor syntax or coding practices.

What am I trying to do? The information I need about when a batch starts and ends is in different formats in different logs. I am trying to come up with a table that shows how long it takes to run each batch of transactions.

What is in the logs?

- There is a batch id in each of the logs, but in a different format, so I use regex to extract it. This is what I want to group on.
- There is one unique string per batch in one log which contains "Found the last"; this is my end time.
- For each transaction in the batch there is a log which contains "After payload". If there are 100 entries in the batch, there are 100 logs with this message. I want to use the first of these logs as my start time.

How am I trying to do it? I am filtering out any unnecessary logs by only looking for logs that have the message I want, which works:

    source="batch-queue-receiver.log"
    | rex field=_raw "[\? ]Batch\s?[iI][dD]\s?: (?<Batchid>.{7}+)"
    | rex field=_raw "[\? ]MerchantId: (?<Merchantid>.{7}+)"
    | rex field=_raw "[\? ]INFO a.c.s.b.q..+- (?<info>.{14}+)"
    | where Batchid != ""
    | where info = "Found the last" OR info = "After payload "

I then want to use transaction to group by batch. This works, but because I have multiple entries per batch it takes the last entry, not the first, so my duration is much smaller than expected:

    source="batch-queue-receiver.log"
    | rex field=_raw "[\? ]Batch\s?[iI][dD]\s?: (?<Batchid>.{7}+)"
    | rex field=_raw "[\? ]MerchantId: (?<Merchantid>.{7}+)"
    | rex field=_raw "[\? ]INFO a.c.s.b.q..+- (?<info>.{14}+)"
    | where Batchid != ""
    | where info = "Found the last" OR info = "After payload "
    | transaction Batchid startswith="After payload conversion" endswith="Found the last message of the batch" mvlist=true
    | table Batchid duration

I then try to dedup, but get no values returned:

    source="batch-queue-receiver.log"
    | rex field=_raw "[\? ]Batch\s?[iI][dD]\s?: (?<Batchid>.{7}+)"
    | rex field=_raw "[\? ]MerchantId: (?<Merchantid>.{7}+)"
    | rex field=_raw "[\? ]INFO a.c.s.b.q..+- (?<info>.{14}+)"
    | where Batchid != ""
    | where info = "Found the last" OR info = "After payload "
    | dedup info Batchid sortby +_time
    | table Merchantid Batchid _time info _raw
    | transaction Batchid startswith="After payload conversion" endswith="Found the last message of the batch" mvlist=true
    | table Batchid duration

If I remove the transaction but keep the dedup, I get only two messages per Batchid (what I want), so I am not sure what is going wrong. It appears that I can't do a transaction after a dedup, but it is probably something else I am not aware of. Any help would be appreciated.

    source="batch-queue-receiver.log"
    | rex field=_raw "[\? ]Batch\s?[iI][dD]\s?: (?<Batchid>.{7}+)"
    | rex field=_raw "[\? ]MerchantId: (?<Merchantid>.{7}+)"
    | rex field=_raw "[\? ]INFO a.c.s.b.q..+- (?<info>.{14}+)"
    | where Batchid != ""
    | where info = "Found the last" OR info = "After payload "
    | dedup info Batchid sortby +_time
    | table Batchid _time info
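A stats-based alternative avoids the transaction/dedup interaction entirely. Below is a minimal sketch under the assumption that the extractions above work, the earliest "After payload " event is the start, and the "Found the last" event is the end; field names and string values are taken from the post, everything else is illustrative:

    source="batch-queue-receiver.log"
    | rex field=_raw "[\? ]Batch\s?[iI][dD]\s?: (?<Batchid>.{7}+)"
    | rex field=_raw "[\? ]INFO a.c.s.b.q..+- (?<info>.{14}+)"
    | where Batchid != ""
    | where info = "Found the last" OR info = "After payload "
    | stats min(eval(if(info=="After payload ", _time, null()))) as startTime
            max(eval(if(info=="Found the last", _time, null()))) as endTime
            by Batchid
    | eval duration = endTime - startTime
    | table Batchid startTime endTime duration

The min()/max() with eval(if(...)) picks the first start event and the last end event per Batchid in one pass, so no dedup is needed.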
Hi all, Has the old "searchWhenChanged" parameter been brought over into the new Dashboards? If not, is there an alternative I can use to get my dashboard to refresh/search when an input changes or if I hit the carriage return/enter key? Thanks in advance.
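For context, this is the classic Simple XML attribute the question refers to; on an input element it triggers the search when the input changes. Whether Dashboard Studio has a one-to-one equivalent is not confirmed here, so treat this purely as a statement of the old behavior being asked about:

    <input type="dropdown" token="env" searchWhenChanged="true">
      <label>Environment</label>
      <choice value="prod">Production</choice>
    </input>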
Hi,

Is there a way to target which application's lookup you want to use? Let's say there are 3 applications, A, B and C, where A and B each have a device.csv, but with different data in them. Depending on the requirement, application C sometimes needs to use device.csv from A and other times from B. That is, I can't use permissions to restrict the lookup, as application C needs access to both.

Is it possible to prepend the application to the lookup or csv at search time, so that I can define which lookup I want to access? Something like:

    | inputlookup A::device.csv

I tried this and it didn't work.

regards
-brett
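A common workaround, sketched with hypothetical names: lookup files resolve by name within the visible app context, so give the two files (or two lookup definitions pointing at them) distinct names in apps A and B, share both globally, and let app C choose explicitly:

    | inputlookup device_from_A.csv

or, to read both into one result set:

    | inputlookup device_from_A.csv
    | inputlookup device_from_B.csv append=true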
Hello,

I've been running Splunk on VMware since March this year, connected to a pfSense. I recently checked on it and I'm not able to access the web interface anymore (http://localhost:8000). I've been reading some posts about how some ports need to be opened. I'm not sure if I would have to do that on the pfSense, since it was already working early on without needing to open the ports. How should I continue with the troubleshooting?

Thanks!
Best regards,
Sara
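A minimal first-pass checklist on the Splunk host itself, assuming a default $SPLUNK_HOME of /opt/splunk (adjust the path if yours differs); these are standard commands, nothing pfSense-specific:

    # Is splunkd (and Splunk Web) actually running?
    /opt/splunk/bin/splunk status

    # Is anything listening on the web port?
    ss -ltn | grep 8000

    # Check Splunk Web's own log for startup errors.
    tail -n 50 /opt/splunk/var/log/splunk/web_service.log

If splunkd is down or nothing listens on 8000, the problem is local to the VM rather than the pfSense firewall.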
When I am trying to create a new app on the deployment server through the bin directory, I am getting the following error:

    WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
    An error occurred: Could not create Splunk settings directory at '/root/.splunk'

We recently upgraded to 9.0.0. Could you please provide the best solution to resolve this?
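The second message means the CLI could not write its settings directory under $HOME. A sketch of the usual checks, assuming you are running the CLI as root and /root exists; adjust if your shell's HOME points elsewhere:

    # Confirm what HOME the CLI will use.
    echo $HOME

    # Create the settings directory by hand if it is missing or unwritable.
    mkdir -p /root/.splunk
    chmod 700 /root/.splunk

    # If you normally run the CLI as a non-root user (e.g. splunk),
    # make sure that user's HOME is writable too.
    sudo -H -u splunk sh -c 'echo $HOME; ls -ld $HOME'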
Hello, I am working on the back of this question from 2021 that has no answer. I have created a new custom entity type with one vital metric, and I can see the information in the Infrastructure Overview dashboard with no issues for the entities of that type. But when I select a single entity and drill down, I do not seem to have an entity dashboard associated with it, as there is with the OOTB entity types. What are the steps to create an entity dashboard for a specific entity type, so that I can see the metric data trend when I drill down to a single entity? Thank you! Andrew
We have a question: we need to forward Tripwire logs to Splunk. I have already enabled syslog on the Tripwire side and opened the connection, but still nothing is found.
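Two quick checks, assuming classic syslog over UDP 514 (adjust if Tripwire is configured for TCP or another port). First, on the receiving host, confirm packets are arriving at all:

    tcpdump -i any -n udp port 514

If packets arrive but nothing is indexed and Splunk itself is the listener, make sure an input exists; a minimal inputs.conf stanza (the sourcetype and index here are placeholders, not known values from this setup):

    [udp://514]
    sourcetype = tripwire
    index = main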
Hello,

I'm trying to experiment with sending data indexed in Splunk to ActiveMQ. I'll probably need to use the JMS Messaging Modular Input (yet to be tested, because I'm trying to output). Did someone do that already? Could you share some feedback?

Thanks!
Ema
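Not a confirmed pattern, but for a quick experiment, ActiveMQ's classic web console exposes a REST endpoint you can POST messages to, which a scripted alert action or external script could call. A sketch, assuming a default ActiveMQ install on localhost with the demo admin/admin credentials and a hypothetical queue name:

    # Publish one message to a queue via the ActiveMQ REST API.
    curl -u admin:admin \
         -d "body=hello from splunk" \
         "http://localhost:8161/api/message/SPLUNK_TEST?type=queue"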
Hello,

We have multiple people working on the content in Splunk Enterprise Security, and I need to be able to find out when correlation searches were created. What is the way to find it?
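Correlation searches are saved searches with the correlationsearch action enabled, so listing them via REST is straightforward. Whether creation time itself is retrievable this way is doubtful, since savedsearches.conf does not store a created timestamp, so the _audit index may be the fallback. A sketch:

    | rest /servicesNS/-/-/saved/searches splunk_server=local
    | search action.correlationsearch.enabled=1
    | table title, eai:acl.app, eai:acl.owner

To look for the creation event in the audit trail, a keyword search against _audit (the search name below is a placeholder, and field coverage is worth verifying in your environment):

    index=_audit sourcetype=audittrail "My Correlation Search Name"
    | table _time user action info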
Hi peeps, I need some information about migrating data from an instance in a cluster environment to a new cluster environment. I was unable to find documentation about this process, so I would like to get some advice or pros/cons details from the experts. Please help. Thank you. 
I'd like to create a base search in a report that will allow me to do a stats count against the different types of values searched for, e.g. Disk_Error_Count, Power_issue_Count, Severity_Major_Count, etc.:

    index=testindex
    | search ("*$CHKR*" "*disk*") OR "*Power_Issue*" OR "*Severity: Major*" OR "*Processor Down*" OR "*TEST Msg" OR "Report Delivery Failure"

outputting the values to a lookup.csv. I'm trying to prevent the report having to hit the index for the individual counts. I have a dashboard that will then output the counts for visualization.
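One way to sketch this: classify each event once with case()/searchmatch(), count by the resulting category, and write the counts to a lookup the dashboard can read cheaply. The search terms come from the post; the category names and lookup filename are illustrative where the post did not name them:

    index=testindex ("*$CHKR*" "*disk*") OR "*Power_Issue*" OR "*Severity: Major*" OR "*Processor Down*" OR "*TEST Msg" OR "Report Delivery Failure"
    | eval category=case(
          searchmatch("*$CHKR* *disk*"), "Disk_Error_Count",
          searchmatch("*Power_Issue*"), "Power_issue_Count",
          searchmatch("*Severity: Major*"), "Severity_Major_Count",
          searchmatch("*Processor Down*"), "Processor_Down_Count",
          searchmatch("*TEST Msg*"), "Test_Msg_Count",
          searchmatch("Report Delivery Failure"), "Report_Delivery_Failure_Count",
          true(), "Other")
    | stats count by category
    | outputlookup event_counts.csv

The dashboard panels can then just run | inputlookup event_counts.csv instead of hitting the index again.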
There are many apps on Splunkbase, some from well-known companies and developers, so I assume those are safe. What about other apps? Are they reviewed by Splunk before being published?
I created this data table with the mvappend command. It doesn't have a _time column and has only 3 months of records:

    MONTH     itemA   itemB   itemC
    2022-05   1       4       7
    2022-06   2       5       8
    2022-07   3       6       9

I want to create a column chart from this data table: x-axis = MONTH, y-axis = the item values. But I can't do it using the chart command. Please let me know how to create it. Sorry if there are any mistakes in this sentence.
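Since mvappend produces a single result with multivalue fields, chart has nothing to split on until each month becomes its own row. A minimal sketch under that assumption, with makeresults standing in for the existing search and the sample values from the table above; the "|" delimiter choice is arbitrary:

    | makeresults
    | eval MONTH=mvappend("2022-05","2022-06","2022-07"),
           itemA=mvappend("1","2","3"),
           itemB=mvappend("4","5","6"),
           itemC=mvappend("7","8","9")
    | eval row=mvzip(mvzip(mvzip(MONTH, itemA, "|"), itemB, "|"), itemC, "|")
    | mvexpand row
    | eval MONTH=mvindex(split(row,"|"),0),
           itemA=tonumber(mvindex(split(row,"|"),1)),
           itemB=tonumber(mvindex(split(row,"|"),2)),
           itemC=tonumber(mvindex(split(row,"|"),3))
    | chart values(itemA) as itemA, values(itemB) as itemB, values(itemC) as itemC by MONTH

After mvexpand, each month is one row, so the column chart visualization renders directly from the chart output.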
Hi, I tried to filter events on version 2.30.0 based on my v1.110.0 configuration, but it failed to drop events in version 2. I have also read the documentation, but somehow it is still not working; maybe there is something I missed. Kindly advise.

SC4S v1.110.0:

    $ cat vendor_product_by_source.csv
    f_null_queue,sc4s_vendor_product,"null_queue"

    $ cat vendor_product_by_source.conf
    filter f_null_queue {
        host(10.14.1.98) or
        host(10.14.1.99) or
        host("uk-test-intfw*" type(glob))
    };

Result: events from the above hosts were dropped and did not show up in Splunk.

SC4S v2.30.0:

    $ cat vendor_product_by_source.csv
    f_null_queue,sc4s_vendor_product,"null_queue"

    $ cat vendor_product_by_source.conf
    filter f_null_queue {
        host(10.14.1.98) or
        host(10.14.1.99) or
        host("uk-test-intfw*" type(glob))
    };

Result: with the same statement as v1, events still continue to flow into Splunk without being filtered.

I have followed the document and made the changes below:

    $ cat vendor_product_by_source.csv
    f_cisco_asa,sc4s_vendor_product,cisco_asa
    f_fortinet_fortios,sc4s_vendor_product,fortinet_fortios

    $ cat vendor_product_by_source.conf
    filter f_cisco_asa {
        host(10.14.1.98) or
        host(10.14.1.99)
    };
    filter f_fortinet_fortios {
        host("uk-test-intfw*" type(glob))
    };
Hi Team,

I have a query where the result returned for the "dateOfBirth" field is "yyyymmdd", like "19911021". Can I format the value returned for "dateOfBirth" as "yyyy/mm/dd", for example "1991/10/21"?

Below is my query:

    index=hsl_app
    | search source = "http:dote-hsl-master-hslcheck"
    | rex "vaccineFlag\":{\"key\":(?<vaxFlagKey>[0-9]),\"value\":\"(?<vaxFlagValue>[^\"]+)\"}}"
    | rex max_match=0 "passengerHashedID\":\"(?<passengerHashedID>[^\"]+)"
    | rex max_match=0 "isCertRequired\":\"(?<isCertRequired>[^\"]+)"
    | rex max_match=0 "nationalityCode\":\"(?<nationality>[^\"]+)"
    | rex max_match=0 "birthDate\":\"(?<dateOfBirth>[^\"]+)"
    | rex "odEligibilityStatus\":\"(?<odEligibilityStatus>[^\"]+)"
    | rex max_match=0 "\"code\":\"(?<paxErrorCode>[^\"]+)\",\"message\":\"(?<paxErrorMessage>[^\"]+)"
    | eval paxCert = mvzip(passengerHashedID, isCertRequired, ",")
    | eval od = mvzip(boardPoint, offPoint, "-")
    | stats earliest(_time) as _time, values(nationality) as nationality, values(dateOfBirth) as dateOfBirth, values(airlineCode) as airlineCode, values(channelID) as channelID, values(boardPoint) as boardPoint, values(offPoint) as offPoint, values(od) as od, values(odEligibilityStatus) as odEligibilityStatus, values(vaxFlagValue) as vaxFlagValue, list(paxCert) as paxCert, values(paxErrorMessage) as paxErrorMessage, values(APIResStatus) as APIResStatus by requestId
    | where airlineCode = "SQ"
    | where isnotnull(paxCert)
    | mvexpand paxCert
    | dedup paxCert
    | eval paxID = mvindex(split(paxCert,","),0), isCertRequired = mvindex(split(paxCert,","),1)
    | stats latest(_time) as _time, values(vaxFlagValue) as vaxFlagValue, values(nationality) as nationality, values(dateOfBirth) as dateOfBirth, sum(eval(if(isCertRequired="Y", 1, 0))) as eligible, sum(eval(if(isCertRequired="N",1,0))) as notEligible by od
    | where NOT (vaxFlagValue="NONE" OR vaxFlagValue="NO SUPPORT") AND eligible = 0
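Yes; the standard approach is strptime() to parse the yyyymmdd string into epoch time and strftime() to render it back in the desired layout. A minimal sketch to add after dateOfBirth is extracted (note it assumes a single value; if dateOfBirth is multivalue at that point, apply it before the stats that collects the values):

    | eval dateOfBirth = strftime(strptime(dateOfBirth, "%Y%m%d"), "%Y/%m/%d")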
I am using the query below to gather all the request IDs for when an error occurs when calling an API. It provides a list of request IDs (over 1000 per hour) in a table format:

    index=prod_diamond sourcetype=CloudWatch_logs source=*downloadInvoice* AND *error* NOT ("lambda-warmer")
    | fields error.requestId
    | rename error.requestId as requestId
    | stats values by requestId

I then want to pass all the values gained from the query into a new query to find what device each requestId is coming from. The new query would look something like this:

    *requestId_1* OR *requestId_2* OR ....requestId_1000* *ChannelName* *lambda*

This query will then be used to find the frequency of each device this error is occurring on. Is there a way to pass all the values retrieved from the first query into the second query within that format?

Please help.
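A subsearch is the usual mechanism here. A sketch that reuses the first query inside square brackets; the format command turns the subsearch rows into (requestId="..." OR requestId="...") terms, which matches on a requestId field in the outer dataset (an assumption; if you need raw-text wildcard matching instead, that needs a different shape). The outer index/sourcetype are also assumptions copied from the first query, and the default subsearch cap of roughly 10,000 rows may truncate a very large list:

    index=prod_diamond sourcetype=CloudWatch_logs *ChannelName* *lambda*
        [ search index=prod_diamond sourcetype=CloudWatch_logs source=*downloadInvoice* AND *error* NOT ("lambda-warmer")
          | rename error.requestId as requestId
          | dedup requestId
          | fields requestId
          | format ]

From there, aggregate by whatever field identifies the device, e.g. | stats count by <device_field>.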
I was wondering if anyone has experience installing the AB on a virtual machine? Is this possible? What are the challenges faced if there are any? There is nothing in the doc about this. Thanks in ad... See more...
I was wondering if anyone has experience installing the AB on a virtual machine? Is this possible? What are the challenges faced if there are any? There is nothing in the doc about this. Thanks in advance.
Hi Team, I need your expertise in regex. Below is the raw log. I need to extract the date and time; the only unique markers are the words "START" and "END". The goal is to find the response time between START and END, in a table format. Note: there are no spaces in the log.

    START</enteringExiting><logLevel>INFO</logLevel><messageType>LOG</messageType><applicationName>GstarSOA</applicationName<programName>GstarRecipientService_MF</programName><functionName>GetRecipient</functionName><host>PerfNode0</host><messageDetails>2022-06-17 04:10:53/utility/logging"><enteringExiting>END</enteringExiting><logLevel>INFO</logLevel><messageType>LOG</messageType><applicationName>GstarSOA</applicationName><programName>GstarRecipientService_MF</programName<functionName>GetRecipient</functionName><host>PerfNode0</host><messageDetails>2022-06-17 04:10:53
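A sketch of one approach, assuming each event contains one START...END pair and that the timestamp to capture is the yyyy-mm-dd hh:mm:ss string in the <messageDetails> element following each marker (worth validating against more sample events):

    | rex "START</enteringExiting>.*?<messageDetails>(?<startTime>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})"
    | rex "END</enteringExiting>.*?<messageDetails>(?<endTime>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})"
    | eval responseTime = strptime(endTime, "%Y-%m-%d %H:%M:%S") - strptime(startTime, "%Y-%m-%d %H:%M:%S")
    | table startTime endTime responseTime

The non-greedy .*? keeps each capture anchored to the first <messageDetails> after its own marker; responseTime comes out in seconds. In this particular sample both timestamps are identical, so the duration would be 0.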
We are about to open a Splunk ticket for this issue, but figured we'd check with the community first.

Problem: The tstats command is not seeing all of our indexed data, and queries would suggest that our forwarders are not sending data, which isn't true. We've run multiple queries against the index confirming the expected data exists in the index and the fields are indexed. In addition, the hosts show up in the data summary for the index. We are searching within a timeline in which events do exist in the index, so it's not like we are searching for data that never existed.

We even performed a restart of the Splunk service and noted that a significant number of hosts' data in the index stopped being processed by tstats/tsidx, according to the timestamp of the latest event for those hosts. It coincides with the Splunk restart, but the data never starts being visible to tstats again, even after several hours. Other hosts' data is processed as expected, so we have some hosts with current "lastSeen" times:

    | tstats count max(_time) as lastSeen where index=windows_sec earliest=-20d@d latest=@m by host
    | convert ctime(lastSeen)

Command that results in missing hosts:

    | tstats values(host) by index

Similar command that also results in the same "missing" hosts (Fast Mode):

    index=* | stats values(host) by index

Modifying the above command from Fast to Verbose mode results in all hosts being displayed as expected.

Additional info:

- Splunk v8.2.6 - no correlation between different forwarder versions either.
- splunkd.log has been analyzed line by line pre/post Splunk service restart. No leads there.
- Tsidx reduction is (and always has been) disabled for all of our indexes.
- We have seen very similar behavior for other queries where Fast Mode results in missing data, but simply changing the mode to Verbose instantly populates all expected data in the results. We have even verified that all fields are identified in the initial "generating" query - no difference in Fast Mode.

This seems like a super basic issue but has completely baffled us for some time, and it is causing serious heartburn and a lack of trust in the data being presented to users. It's almost like a caching issue of some sort, but we are grasping at straws now. Any thoughts/ideas would be welcome. Thanks.
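One diagnostic worth adding to the ticket: diff the tstats count against the raw-event count over an identical window, per host, so the affected set is enumerated precisely. A sketch, using the index name from the post; append is subject to subsearch limits, so keep the window modest:

    | tstats count as tstats_count where index=windows_sec earliest=-24h@h latest=now by host
    | append
        [ search index=windows_sec earliest=-24h@h latest=now
          | stats count as raw_count by host ]
    | stats values(tstats_count) as tstats_count, values(raw_count) as raw_count by host
    | where isnull(tstats_count) OR tstats_count != raw_count

If that surfaces a consistent set of hosts, the walklex command (available in 8.2) can help confirm whether the host terms are actually present in the affected buckets' lexicons.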