All Topics

Hi Splunkers, is it possible for a multiselect input to also accept a typed string? That is, could I either select from the drop-down list or directly type in a string? Kevin
Hi, I am trying to filter out events using props.conf and transforms.conf. I have a requirement where multiple source log files are present and I need to pick a few of them (I can't use host or sourcetype, as they are shared with other indexes), for example:

source1: ABC/DEF/IJK-YTL/master/dev/jobid18/console
source2: ABC/DEF/IJK-YTL/master/dev/jobid19/console

and so on. I have tried the following, but they didn't work (I still see the logs being indexed rather than dropped):

props.conf

Option 1:
[source::ABC/DEF/IJK-YTL/master/dev/.*?/console]
TRANSFORMS-set = setnull

Option 2:
[source::\ABC\/DEF\/IJK-YTL\/master\/dev\/.*?\/console]

Option 3:
[source::.../console]

Option 4:
[source::...[/\\]master[/\\]...[/\\]console]

transforms.conf

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

Can someone please help? Splunk is on the Windows platform.
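
A sketch of one possible fix, under two assumptions worth verifying first: (1) this props.conf/transforms.conf pair lives on the parsing tier (indexers or heavy forwarders, not universal forwarders), and (2) on Windows the events' source field contains backslashes, so stanzas written with forward slashes never match. Check the real value with | tstats count where index=<your_index> by source. Also note that source:: stanzas take Splunk wildcards, not PCRE, so .*? does not work there:

# props.conf
# ... matches any run of characters including path separators,
# so it works whichever way the slashes lean
[source::...IJK-YTL...console]
TRANSFORMS-set = setnull

# transforms.conf (unchanged from the version above)
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue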
Hi team, can anyone kindly confirm whether the Splunk Light free version is still supported? If not, could you please suggest a free version of Splunk to deploy in Kubernetes?
Hi Splunkers, I am getting the below error on a clustered search head:

"The percentage of non high priority searches delayed (88%) over the last 24 hours is very high and exceeded the red thresholds (20%) on this Splunk instance. Total Searches that were part of this percentage=1615. Total delayed Searches=1430"

This issue is seen particularly in one app whose index receives a large amount of data. How do I fix this issue? Thanks in advance
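
One way to see which scheduled searches are piling up (a sketch, not a complete fix; status and reason are fields as they appear in scheduler.log, so adjust if your version names them differently):

index=_internal sourcetype=scheduler status!=success
| stats count by app, savedsearch_name, status, reason
| sort - count

Common remediations are staggering the cron schedules of the heaviest app's searches, shortening their time ranges, and raising scheduler concurrency only if the search heads have CPU headroom.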
I need help writing an SPL query to list all the Middleware reports on Splunk Enterprise, and an alert to email me when any report is changed, please. Thank you very much.
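
A sketch of the listing half, assuming "Middleware" appears in the report names (the | rest endpoint is standard; the title filter and field list are assumptions):

| rest /servicesNS/-/-/saved/searches
| search title="*Middleware*"
| table title, eai:acl.app, eai:acl.owner, updated

For change detection, one approach is to schedule this search, save each run's updated timestamps with outputlookup, and alert (with an email action) when the current values differ from the previous run's inputlookup.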
Is there some way (bulk is better) to update the email field in an alert's action trigger through search/REST?
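
A minimal sketch of the single-search case, assuming an alert named my_alert in the search app (host, credentials, and names are placeholders; action.email.to is the standard saved-search parameter for the recipient list):

# Update the recipient list for one alert via the management port:
curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/nobody/search/saved/searches/my_alert \
  -d action.email.to="new.recipient@example.com"

For bulk updates, the same call can be looped over a list of saved-search names, for example one generated with | rest /servicesNS/-/-/saved/searches.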
Hey everyone,

Long time user, first time poster. Here's the root of my problem: I'm supporting users in a practice environment who wanted to use Ubuntu 21.10 server to host Splunk. The install goes great right up until the point at which users are instructed to run:

/opt/splunk/bin/splunk enable boot-start -systemd-managed 1

The splunk binary creates the Splunkd.service file as you'd expect, but it doesn't actually work:

root@short-term-test:/home/ayy# systemctl start Splunkd.service
Job for Splunkd.service failed because the control process exited with error code.
See "systemctl status Splunkd.service" and "journalctl -xeu Splunkd.service" for details.

And of course, the journal doesn't tell you anything other than that the service failed. But /var/log/syslog gives you enough breadcrumbs to figure out the root of the problem:

Jan 11 20:06:18 short-term-test bash[7690]: chown: cannot access '/sys/fs/cgroup/cpu/system.slice/Splunkd.service': No such file or directory

So we go to /etc/systemd/system/Splunkd.service:

root@short-term-test:/home/ayy# cat /etc/systemd/system/Splunkd.service
#This unit file replaces the traditional start-up script for systemd
#configurations, and is used when enabling boot-start for Splunk on
#systemd-based Linux distributions.

[Unit]
Description=Systemd service file for Splunk, generated by 'splunk enable boot-start'
After=network.target

[Service]
Type=simple
Restart=always
ExecStart=/opt/splunk/bin/splunk _internal_launch_under_systemd
KillMode=mixed
KillSignal=SIGINT
TimeoutStopSec=360
LimitNOFILE=65536
SuccessExitStatus=51 52
RestartPreventExitStatus=51
RestartForceExitStatus=52
User=splunk
Group=splunk
Delegate=true
CPUShares=1024
MemoryLimit=2073997312
PermissionsStartOnly=true
ExecStartPost=/bin/bash -c "chown -R splunk:splunk /sys/fs/cgroup/cpu/system.slice/%n"
ExecStartPost=/bin/bash -c "chown -R splunk:splunk /sys/fs/cgroup/memory/system.slice/%n"

[Install]
WantedBy=multi-user.target

And we see, in the ExecStartPost lines, attempts to change file ownership for all the cgroup files under /sys/fs/cgroup/cpu/system.slice/Splunkd.service and /sys/fs/cgroup/memory/system.slice/Splunkd.service, which is where /var/log/syslog says the service is failing. So I searched through the community posts and discovered this is something of a known problem:

https://community.splunk.com/t5/Installation/Systemd-broken-on-new-install/m-p/482885#M10392
https://community.splunk.com/t5/Installation/Why-is-Splunk-not-starting-after-upgrade-to-8/m-p/478826

From these two posts, there appear to be two schools of thought:

- Splunk hardcodes the cgroup directories it thinks the service should be using, so if they're wrong, find them yourself. From my brief experience with Ubuntu 21.10 and the new cgroup setup, it appears that the files are located under /sys/fs/cgroup/system.slice/Splunkd.service/.
- Splunk doesn't need the chown commands if /opt/splunk/etc/splunk-launch.conf is set to lower its privileges to the splunk user (SPLUNK_OS_USER=splunk). If I comment out the ExecStartPost statements in Splunkd.service and use systemctl daemon-reload to get the system to honor the service file changes, Splunk starts normally, sort of supporting the idea that Splunk can handle its cgroup file permissions internally.

This brings me to my actual questions:

1. Would there be any ill effects of changing the Splunkd.service script to change file ownership of all the files in the /sys/fs/cgroup/system.slice/Splunkd.service/ directory to the splunk user, or are there specific files and parameters that I need to leave alone?
2. Can I instead remove/disable the ExecStartPost statements and rely on the SPLUNK_OS_USER parameter in /opt/splunk/etc/splunk-launch.conf instead?
3. Is there some sort of plan to fix the systemd Splunkd.service file? I understand that Ubuntu 21.10 is NOT a long-term support release, BUT I have a feeling that this adoption of cgroups v2 and the new cgroup layout is going to be a prevalent change in the next long-term support release (e.g. 22.04 and beyond), and I really don't want to have to do this workaround for every fresh Splunk install in the future.

Thank you for your time and consideration.
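
A sketch of the second school of thought done as a systemd drop-in, so the change survives a regenerated unit file (standard systemd mechanism; the assumption is that SPLUNK_OS_USER=splunk is set in /opt/splunk/etc/splunk-launch.conf, making the chowns unnecessary):

# Create an override instead of editing Splunkd.service directly:
#   systemctl edit Splunkd.service
# then add the following; an empty ExecStartPost= clears the inherited list:
[Service]
ExecStartPost=

# Apply and restart:
#   systemctl daemon-reload
#   systemctl restart Splunkd.service

The override lands in /etc/systemd/system/Splunkd.service.d/override.conf, leaving the generated unit file untouched for future upgrades.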
Community,

Unless you have been out on a deserted island, you know there were a few NIST vulnerabilities reported that have impacted AppDynamics controllers (SaaS and on-prem) and a number of Java-based agents.

The main reason I am authoring this thread for the community is to offer my perspective, but also to humbly request that if I got something wrong or missed something, you take the five minutes to call it out and help me and others better understand the nature of this beast.

The vulnerabilities seem to fall into three buckets:

1. Directly - the product ships the Apache log4j jars (core) with an impacted version.
2. Indirectly - a library jar has the log4j jars within it (nested jars).
3. Diverged - at some point the log4j source was branched and modified into a custom version.

Examples:

1. Case - the machine agent, within <agent home>/lib/log4j-core-2.x.jar.
2. I don't have a good example of this for AppDynamics.
3. Case - the machine agent, within <agent home>/lib/singularity-log4j-1.2.15.6.jar, under singularity-log4j-1.2.15.6.jar\com\singularity\util\org\apache\log4j\.

Within the third example there is a NOTICE file in the META-INF folder that says "Copyright 2017. AppDynamics modified from Log4j2". So this fork was made after the end of life of log4j 1.2.x, and without a different detection method we are unsure whether the vulnerabilities that plague us are present in this variant (yes, Loki reference).

Now we can get into the detection methods.

By file name
For case 1, it is a simple directory listing and search for log4j*. Case 2 gets a bit messy, since each of the JAR files has to be listed to search for log4j. The third case is much like case 2, but we get into a mess since, as shown above, the file name is mangled with "singularity". (A scan sketch follows at the end of this post.)

By JVM class presence
For each of the three cases, this method requires that another Java class be injected into the JVM, which can then attempt to exercise the log4j vulnerability, be that to execute a JNDI lookup or to capture and deserialize a message packet.

By directed attack
Some of the vulnerabilities have exploits in the wild, and those exploits have already been deployed against a number of targets with very destructive results.

The majority of the vulnerability scan vendors are continuing to fine-tune their scan patterns, but to be 100% sure, IMHO, there really needs to be a Java class injection that pulls the in-memory log4j class references and probes them, especially with the diverged variants.

------- Laundry list of vulnerabilities --------------
https://nvd.nist.gov/vuln/detail/CVE-2021-44228   10.0 Critical - log4j 2.x, fixed in 2.15.0
https://nvd.nist.gov/vuln/detail/CVE-2021-45046   9.0 Critical - log4j 2.x, fixed in 2.16.0
https://nvd.nist.gov/vuln/detail/CVE-2021-45105   5.9 Medium - log4j 2.x, fixed in 2.17.0
https://nvd.nist.gov/vuln/detail/CVE-2021-4104    7.5 High - log4j 1.2.x; 1.2 basically reached end of life in 2015, but another vulnerability was found: "deserialization of untrusted data when the attacker has write access to the Log4j configuration"

Thank you,
Billy Cole
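
A sketch of the file-name detection for the cases above, assuming a Linux host with unzip available (JARs are ZIP archives, so nested copies show up in the archive listing); the scan root and paths are placeholders:

#!/bin/sh
# Case 1: log4j jars sitting directly on disk
find /opt -type f -name 'log4j*.jar' 2>/dev/null

# Cases 2 and 3: log4j class paths nested inside other jars
find /opt -type f -name '*.jar' 2>/dev/null | while read -r jar; do
  # Look for log4j package paths or the JndiLookup class in the listing
  if unzip -l "$jar" 2>/dev/null | grep -qiE 'org/apache/log4j|JndiLookup\.class'; then
    echo "possible log4j content: $jar"
  fi
done

Note that the diverged singularity jar repackages the classes under com/singularity/util/org/apache/log4j/, which this grep still matches because it looks for the substring anywhere in the entry path.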
I am attempting to make a line graph with information from a CSV with info from the past year:

                 Nov 2020  December 2020  January 2021  February 2021
Events              19          9              5             7
Cleared              3          1              1             7
Incidents            3          1              1             0
False Positives     16          8              4             7

I need each category to have its own line on a line graph, with the months on the x-axis (Nov 20 to Nov 21). It doesn't seem complicated, but I can't seem to get the results. Any help would be appreciated. Thanks
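
A sketch of one way to build this, assuming the CSV is uploaded as a lookup whose first column is named category and whose remaining columns are the month headers (the file and field names are assumptions; adjust to yours). untable flips the month columns into rows, and xyseries pivots them back with the categories as separate series:

| inputlookup yearly_counts.csv
| untable category month count
| eval _time=coalesce(strptime(month, "%b %Y"), strptime(month, "%B %Y"))
| xyseries _time category count
| sort _time

Rendered as a line chart, this gives one line per category with the months in order on the x-axis; the coalesce handles both abbreviated ("Nov 2020") and full ("December 2020") month names.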
Hi, we have a Splunk distributed cluster setup with 3 indexers, 3 search heads, and 1 cluster master.

The cluster was healthy when the setup was done. However, after switching the log traffic to the Splunk indexers, we see one of the search heads detach from the cluster and keep restarting. Below is the error I see in splunkd.log for the search head that is having the problem:

01-11-2022 17:39:57.495 +0000 INFO MetricSchemaProcessor [1424 typing] - channel confkey=source::/opt/splunk/var/log/introspection/disk_objects.log|host::splunk-shc-search-head-2|splunk_intro_disk_objects|CLONE_CHANNEL has an event with no measure, will be skipped.
01-11-2022 17:39:57.751 +0000 INFO IndexProcessor [1173 MainThread] - handleSignal : Disabling streaming searches.
01-11-2022 17:39:57.752 +0000 INFO IndexProcessor [1173 MainThread] - request state change from=RUN to=SHUTDOWN_SIGNALED
01-11-2022 17:39:57.752 +0000 INFO SHClusterMgr [1173 MainThread] - Starting to Signal shutdown RAFT
01-11-2022 17:39:57.752 +0000 INFO SHCRaftConsensus [1173 MainThread] - Shutdown signal received.
01-11-2022 17:39:57.752 +0000 INFO SHClusterMgr [1173 MainThread] - Signal shutdown RAFT completed
01-11-2022 17:39:57.752 +0000 INFO UiHttpListener [1173 MainThread] - Shutting down webui
01-11-2022 17:39:57.752 +0000 INFO UiHttpListener [1173 MainThread] - Shutting down webui completed

Any insights on what is causing this?
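
The excerpt itself only shows a clean shutdown sequence (handleSignal, then SHUTDOWN_SIGNALED), so something outside splunkd is sending the signal. A couple of diagnostic steps that may help localize it (standard commands; credentials are placeholders):

# From any member, check SHC state and which node is captain:
/opt/splunk/bin/splunk show shcluster-status -auth admin:changeme

# On the failing member, look at what precedes the shutdown in splunkd.log:
grep -B20 'Disabling streaming searches' /opt/splunk/var/log/splunk/splunkd.log

Given the pod-style hostname (splunk-shc-search-head-2), it may also be worth checking whether a Kubernetes liveness probe is timing out under the new load and restarting the container; in that case the probe settings, not Splunk, would need adjusting.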
Hello, this question has probably been asked and answered, but I just can't seem to find a good solution. I have a search that returns N similar JSON objects of approximately this shape:

{
  name: "name",
  id: "id",
  somelist: [
    { name: "foo", value: "bar" },
    { name: "foo", value: "baz" },
    ...
  ]
}

I want to compare the "somelist" part of every object to another object, and in the end write out the diff between them to a separate column.

Thanks a lot,
Vadim
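
A sketch of one way to make the lists comparable, assuming the events are JSON in _raw (field names follow the example above). The idea is to flatten somelist into sorted name=value pairs, join them into a single signature string, and then compare signatures across objects:

| spath output=names path=somelist{}.name
| spath output=values path=somelist{}.value
| eval pairs=mvzip(names, values, "=")
| eval signature=mvjoin(mvsort(pairs), "; ")
| stats values(signature) as signatures by name
| eval diff=if(mvcount(signatures) > 1, "lists differ", "identical")

The final grouping key (name here) is an assumption; group by whatever identifies the pair of objects you want to diff, and the diff column then flags any key whose objects carry different somelist contents.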
Hi, I have this Splunk query:

index=xxxxxxx sourcetype="xxxxxx" EXPRSSN=IBM4D*
| eval DATE=strftime(strptime(DATE,"%d%b%Y"),"%Y-%m-%d")
| table EXPRSSN DATE MIPS
| eval _time=strptime(DATE." "."00:00:00","%Y-%m-%d %H:%M:%S")
| chart values(MIPS) over _time by EXPRSSN

I want to add a linear trend line to my chart, hoping I can re-create this. How do I also customize my line chart? I want to have the other series filled as well; below is the one I am getting from Splunk.
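
Splunk's trendline command overlays a moving average (not a true least-squares line, but often close enough visually). A sketch, assuming one of the series produced by the chart is named IBM4D1 (replace with an actual EXPRSSN value; one trendline clause is needed per series):

index=xxxxxxx sourcetype="xxxxxx" EXPRSSN=IBM4D*
| eval DATE=strftime(strptime(DATE,"%d%b%Y"),"%Y-%m-%d")
| eval _time=strptime(DATE." 00:00:00","%Y-%m-%d %H:%M:%S")
| chart values(MIPS) over _time by EXPRSSN
| trendline sma4(IBM4D1) as IBM4D1_trend

For the fill, switching the visualization to an area chart (Format menu in the UI, or charting.chart set to area in the panel source) fills under each line.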
Requirement: I am trying to create a report based on the State of an Incident (ticket), looking for the latest State of each ticket. Below is my search query. If the selected time range is longer than "Today", the results show previous ticket States as well. For example, Tkt123's current State is Resolved; prior to Resolved it was "IN PROGRESS". The expected result should show the current State of Tkt123. In the query below I am looking for tickets in the "IN PROGRESS" State in Q_name=IT, but it is showing Tkt123 as well, even though when I check Tkt123 in the SNOW tool it has Resolved status.

index=SNOW source=SNOW_source Q_name=IT
| stats latest(State) AS State BY Number Last_Updated
| stats dc(Number) AS Total
| search State="IN PROGRESS"
| appendpipe [stats count | eval Total="NODATA" | where count==0 | table Total]

@ITWhisperer
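
A sketch of a likely fix (field names as in the original query). Two things work against the original: grouping by both Number and Last_Updated keeps one row per update, so older States survive; and | stats dc(Number) discards the State field before | search State="IN PROGRESS" can filter on it. Grouping by Number alone and filtering before counting should return only tickets whose latest State is IN PROGRESS:

index=SNOW source=SNOW_source Q_name=IT
| stats latest(State) AS State BY Number
| search State="IN PROGRESS"
| stats dc(Number) AS Total
| appendpipe [stats count | eval Total="NODATA" | where count==0 | table Total]

latest() picks the value from the most recent event by _time, so this assumes the SNOW events are timestamped by update time.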
I've been getting this error for a few weeks:

Search peer <indexer> has the following message: Failed to make bucket = main~5360~4D6B6D21-6F08-44EA-B793-EFEB8C344E21 searchable, retry count = 743.

I have a case open with Splunk Support and I wanted to know if their logic and feedback are sound. After stopping the indexer I ran fsck and saw that multiple buckets needed to be rebuilt. They all rebuilt successfully except the one in the error message. After sharing this information and the logs with Support, I was told that "For a single bucket you can ignore this warning. Data will still be searchable for this bucket."

So, should I consider all is well because Support told me so, and ignore the Health of Splunk Deployment report that shows the red exclamation mark icons for Buckets and Data Durability?
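
If you would rather clear the condition than ignore it, one option is to rebuild just that bucket by hand (a sketch; the bucket directory name is a placeholder built from the bucket id 5360 in the message, and splunkd should be stopped on that indexer first):

# On the affected indexer, with splunkd stopped:
/opt/splunk/bin/splunk rebuild /opt/splunk/var/lib/splunk/defaultdb/db/db_<latest>_<earliest>_5360

splunk rebuild regenerates the index and metadata files from the raw data, which is what the "make bucket searchable" retries appear to be failing at; in a clustered index, checking the bucket's status on the cluster master first is prudent.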
Hello, I am trying to make life easier for my colleagues by providing filtering for error logs. I have different types of errors/warnings and want to display the number of occurrences in the checkbox. Something like:

<input type="checkbox" token="tok_dummy_6">
  <label>Erroneous Calls ($tok_sum_erroneous$)</label>
  <choice value="yes">Erroneous Calls ($tok_sum_erroneous$)</choice>
  <search>
    <query>
      index=<myIndex> Trace_ID = $tok_traceid$ error_type="Erroneous Call"
      | stats count as Errors
    </query>
    <done>
      <set token="tok_sum_erroneous">$result.Errors$</set>
    </done>
  </search>
</input>

The green part works like a charm, but I really do not like the label, as it makes no sense when the checkbox itself states the same thing, and just putting "yes" seems kind of childish. So I want to delete the label and just go with the text of the choice option. Any ideas? I tried double $$ with no luck.

Kind regards, Mike
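
Since <label> is optional on Simple XML inputs, a sketch of the same input with the label simply removed may be all that is needed (everything else unchanged from the example above; index name is a placeholder):

<input type="checkbox" token="tok_dummy_6">
  <!-- no <label> element: it is optional for inputs -->
  <choice value="yes">Erroneous Calls ($tok_sum_erroneous$)</choice>
  <search>
    <query>
      index=myIndex Trace_ID=$tok_traceid$ error_type="Erroneous Call"
      | stats count as Errors
    </query>
    <done>
      <set token="tok_sum_erroneous">$result.Errors$</set>
    </done>
  </search>
</input>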
Hi,

I think I have found a bug in Splunk! I have a table like the one below, and I need clicks on different columns to trigger different actions (drill-down). I have noticed that, because I have a 5-second refresh rate on the table, when a user clicks on a column the tokens get set correctly 80% of the time, and the other 20% of the time a value of "null" is set. Is there a workaround for this? I am on 8.2.0. When I changed the refresh to 60 seconds it works all the time; when I put it at 1 second, it never works.

The process_serviceName token gets set to the correct value 80% of the time, but "null" is set the other 20%:

<eval token="process_serviceName">mvindex(split($row.service_name$," # "),0)</eval>

I have other columns that work fine, but I think this one fails because I am doing a calculation on the value.

<condition match="$click.name2$==&quot;Process_Name&quot; AND ($row.Service_type$==&quot;agent-based&quot; OR $row.Service_type$==&quot;launcher-based&quot;)">
  <!--set token="process_serviceName">$row.service_name$</set-->
  <eval token="process_serviceName">mvindex(split($row.service_name$," # "),0)</eval>
  <set token="pid_clicked">$row.PID$</set>
  <set token="launcher_name_set_from_process_token">*</set>
  <unset token="Process_historic_graph"></unset>
  <unset token="Health_Token"></unset>
  <unset token="Resources_Token"></unset>
  <unset token="Java_Token"></unset>
</condition>
Hello, I am not getting events from uptime.sh, which reports system date and uptime information via a shell command. This script is part of the Splunk Add-on for Unix and Linux, which is installed on the universal forwarder. I am getting data from other inputs like cpu.sh, vmstat.sh, df.sh, etc., but not from uptime.sh. I checked that disabled is set to false, in sync with the other stanzas (cpu, vmstat, etc.). Any insight into what I might be missing?
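
Two checks that often reveal why a single scripted input is silent (standard commands; paths assume a default universal forwarder install):

# On the forwarder: confirm the effective stanza and which file it comes from
/opt/splunkforwarder/bin/splunk btool inputs list --debug | grep -A5 uptime.sh

Then, from the search head, look for script execution errors reported by the forwarder:

index=_internal sourcetype=splunkd component=ExecProcessor uptime.sh

If the script fails when run manually as the splunk user (permissions, a missing uptime binary, PATH issues), ExecProcessor will usually log the stderr output in that second search.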
Hi, I have a problem in my infrastructure: logs are being duplicated. I am trying to identify from which origin (HF, UF, or syslog) the duplicate logs are being sent, but I have not been successful. Any search ideas that can identify the origin that sent them? Thanks
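
A sketch that can help attribute duplicates, assuming the duplicate events are byte-identical (index name is a placeholder): hash each raw event and inspect whether the copies differ in their input metadata.

index=<your_index> earliest=-1h
| eval raw_hash=md5(_raw)
| stats count values(host) values(source) values(splunk_server) by raw_hash
| where count > 1

If the copies share the same host and source, the duplication likely happens at the input itself (e.g. a monitored file that also arrives via syslog); if they differ, the metadata usually names the second sender. The forwarders connected to each indexer can also be listed from index=_internal source=*metrics.log* group=tcpin_connections.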
I have multiple artifacts, each with a checkbox beside it. Is there a datapath to access the currently selected artifact? Or perhaps a means to select artifacts and ONLY run a playbook or actions on the selected artifacts in the UI? I can't seem to find a datapath or playbook parameter that does this. Please help!
Hi! I want to display a large number in a table in Splunk Dashboard Studio, but the format of the number is altered.

Example: myfield=2201103670207336994001422000

In the table it is formatted to scientific notation: 2.201103670207337e+27

When I try to add precision 0 on the field, I get this: 2201103670207337000000000000

And when I open the table in search, I get the actual value: 2201103670207336994001422000

I have tried myfield=tostring(myfield) and other formatting options, but nothing works in the Dashboard Studio table view. Preferably this field should be treated as a string, but Splunk seems to automatically treat the field as a number. Has anyone experienced this before and found a solution?
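
What you are seeing is IEEE 754 double precision: a 64-bit float carries only about 15-17 significant decimal digits, so 2201103670207336994001422000 cannot be stored exactly and renders as 2.201103670207337e+27. A hedged sketch of one common workaround, which keeps the value out of the numeric path by making it non-numeric (the prefix character is an assumption; any non-digit works, at the cost of being visible in the cell):

| eval myfield_display="#".myfield

If a visible prefix is unacceptable, there may be no clean fix inside the Dashboard Studio table itself, since the coercion happens at render time regardless of the field being a string in the search results.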