All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello all, I want to create a lookup file with an owner, in a specific App, and with sharing = App. I used the command: | outputlookup create_context=user file.csv and the endpoint: /servicesNS/<username>/<namespace>/search/jobs. It creates a lookup file with an owner in the specific App, but with sharing = Private, so the file is not visible to other app users. Does anyone have a solution to this, please?
Hello all, is it possible to modify the permissions on a file via the API? Or is there a way to do it other than the classic method? Thanks all.
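One possible approach (a sketch, not verified against your deployment): splunkd resources generally expose a per-object acl endpoint, so after the lookup file exists you may be able to POST its sharing level and owner to it. The host, credentials, and file name below are placeholders.

```
curl -k -u admin:changeme \
    https://localhost:8089/servicesNS/<username>/<namespace>/data/lookup-table-files/file.csv/acl \
    -d sharing=app \
    -d owner=<username>
```

The same acl endpoint pattern would apply to other knowledge objects, which may also answer the question about changing permissions via the API.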
Info: Bounced: DCID 8413617 MID 19338947 From: <MariaDubois@example.com> To: <abcdef@buttercupgames.com> RID 0 - 5.4.7 - Delivery expired (message too old) ('000', ['timeout'])

Desired output:
from_mail_id = MariaDubois@example.com
to_mail_id = abcdef@buttercupgames.com

Please help me with a solution. Thanks.
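Both fields can be captured with a single regex anchored on the "From: <...>" and "To: <...>" brackets. Below is a minimal Python sketch of that pattern (the field names match the desired output above); the same regex should be usable in SPL via the rex command, as shown in the comment.

```python
import re

# Sample bounce log line from the question
line = ("Info: Bounced: DCID 8413617 MID 19338947 From: <MariaDubois@example.com> "
        "To: <abcdef@buttercupgames.com> RID 0 - 5.4.7 - Delivery expired (message too old)")

# Equivalent SPL sketch:
# | rex "From: <(?<from_mail_id>[^>]+)>\s+To: <(?<to_mail_id>[^>]+)>"
m = re.search(r"From: <(?P<from_mail_id>[^>]+)>\s+To: <(?P<to_mail_id>[^>]+)>", line)
print(m.group("from_mail_id"))  # MariaDubois@example.com
print(m.group("to_mail_id"))    # abcdef@buttercupgames.com
```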
Hi, I am not able to change the colors of the graph. I have tried multiple options in the source code, but the same color is always applied. I want three different colors for the three different results (in this case: succeeded, failed, aborted).

Query:

</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">collapsed</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisTitleY2.visibility">collapsed</option>
<option name="charting.axisX.abbreviation">none</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.abbreviation">none</option>
<option name="charting.axisY.maximumNumber">500</option>
<option name="charting.axisY.minimumNumber">0</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.abbreviation">none</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">column</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">all</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.drilldown">none</option>
<option name="charting.fieldColors">{"succeeded": 0x425b3c, "failed": 0x5b3c53, "aborted": 0xc98b06}</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.mode">standard</option>
<option name="charting.legend.placement">none</option>
<option name="charting.lineWidth">2</option>
<option name="refresh.display">progressbar</option>
<option name="trellis.enabled">1</option>
<option name="trellis.scales.shared">0</option>
<option name="trellis.size">small</option>
<option name="trellis.splitBy">projects</option>
</chart>
Hello, it would be great if the Servers 'search box' feature within the AppD Controller (it doesn't have to be just the Servers search) offered more search options, perhaps the ability to use 'wildcard' characters so that certain servers would show, or even regex to highlight a particular list. Would this be the place to recommend further enhancements to the Controller? Thanks, Tim
Greetings! I need help: I am experiencing these errors while searching:

Search peer Splkidx04 has the following message: The minimum free disk space (5000MB) reached for /opt/splunk/var/run/splunk/dispatch.
Problem replicating config (bundle) to search peer '10.10.x.96:8089', HTTP response code 500 (HTTP/1.1 500 Error writing to /opt/splunk/var/run/searchpeers/Splunksh1-1641956462.bundle.4ef204fd344a6181.tmp: No space left on device). Error writing to /opt/splunk/var/run/searchpeers/Splunksh01-1641956462.bundle.4ef204fd344a6181.tmp: No space left on device (Unknown write error).
1/12/2022, 10:44:22 AM  The search process with sid=scheduler__pacyn__search__RMD5837e19b530431259_at_1641973200_94478 on peer=Spkidx4 might have returned partial results due to a reading error while waiting for the peer. This can occur if the peer unexpectedly closes or resets the connection during a planned restart. Try running the search again.
1/12/2022, 9:43:09 AM  Search peer Splkidx4 has the following message: The minimum free disk space (5000MB) reached for /opt/splunk/var/run/splunk/dispatch.
1/12/2022, 10:59:00 AM  5 errors occurred while the search was executing, so search results might be incomplete.

Kindly help me fix this issue. Thank you in advance!
Is there a way to add a field to an event from a different event, assuming they share a common key, using a simple search (without using a pipe)? The reason is that the resulting event will need to be tagged via an event type (which doesn't allow a complex search) so it can be included as part of a data model. For example:

Event 1 - field A (common key): ABC, field B: Sunny
Event 2 - field A (common key): ABC, field C: Morning
Resulting Event 1: field A: ABC, field B: Sunny, field C: Morning

The final event will then be tagged so it can be included in the data model. I appreciate any advice/suggestions.
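For reference, the merge itself is straightforward at search time; the sketch below (field names field_A/field_B/field_C are from the example above, and the index is a placeholder) combines events sharing the key with stats:

```
index=my_index (field_B=* OR field_C=*)
| stats values(field_B) as field_B, values(field_C) as field_C by field_A
```

This produces the merged row, but only after a pipe, so it would not satisfy the event-type restriction by itself; the usual alternatives are enriching at ingest time or via an automatic lookup so the fields exist on the raw event.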
Hi, I need help extracting a word from a string.

Strings:
Security agent installation attempted Endpoint: (Not Found)
Security agent intstallation attempted Endpoint: hostname

Desired result:
Not Found
hostname

How can I construct a regular expression to extract what I want?
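One pattern that handles both forms is to capture whatever follows "Endpoint:", treating the parentheses as optional. A minimal Python sketch (the SPL equivalent with rex is in the comment):

```python
import re

samples = [
    "Security agent installation attempted Endpoint: (Not Found)",
    "Security agent intstallation attempted Endpoint: hostname",
]

# Equivalent SPL sketch: | rex "Endpoint:\s+\(?(?<endpoint>[^)]+)\)?"
# \(? and \)? make the parentheses optional; [^)]+ stops before a closing paren.
pattern = re.compile(r"Endpoint:\s+\(?(?P<endpoint>[^)]+)\)?")
results = [pattern.search(s).group("endpoint") for s in samples]
print(results)  # ['Not Found', 'hostname']
```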
Hi Splunkers, is it possible for a multiselect input to also accept a typed string? That is, can I either select from the drop-down list or directly input a string? Kevin
Hi, I am trying to filter out events using props.conf and transforms.conf. There are multiple source log files, and I need to pick a few of them (I can't use host or sourcetype, as they are shared with other indexes):

source1: ABC/DEF/IJK-YTL/master/dev/jobid18/console
source2: ABC/DEF/IJK-YTL/master/dev/jobid19/console
and so on.

I have tried the following, but they didn't work (I still see the logs being indexed, not dropped):

props.conf:
[source::ABC/DEF/IJK-YTL/master/dev/.*?/console]
TRANSFORMS-set = setnull

Option 2:
[source::\ABC\/DEF\/IJK-YTL\/master\/dev\/.*?\/console]

Option 3:
[source::.../console]

Option 4:
[source::...[/\\]master[/\\]...[/\\]console]

transforms.conf:
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

Can someone please help? Splunk is on a Windows platform.
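A sketch that may be worth trying (unverified; first run `| stats count by source` to confirm the exact source values Splunk records on Windows, including the path-separator style). In source:: stanzas, `...` matches any number of path segments and `*` matches within a single segment; regex syntax such as `.*?` is not honored there. These settings also need to live on the first instance that parses the data (indexer or heavy forwarder), not on a universal forwarder.

```
# props.conf (sketch; path assumed from the sources in the question)
[source::...IJK-YTL/master/dev/*/console]
TRANSFORMS-set = setnull

# transforms.conf (unchanged from the question)
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
```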
Hi team, can anyone kindly confirm whether the free version of Splunk Light is still supported? If not, please suggest a free version of Splunk to deploy in Kubernetes.
Hi Splunkers, I am getting the below error on a clustered search head:

"The percentage of non high priority searches delayed (88%) over the last 24 hours is very high and exceeded the red thresholds (20%) on this Splunk instance. Total Searches that were part of this percentage=1615. Total delayed Searches=1430"

This issue is particularly seen on one app whose index receives a large amount of data. How do I fix this issue? Thanks in advance.
I need help writing an SPL search to list all the Middleware reports in Splunk Enterprise, and an alert to email me when any report is changed, please. Thank you very much.
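A sketch using the rest command (it assumes the Middleware reports can be identified by a "Middleware" string in their title; adjust the filter to match however they are actually named or the app they live in):

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search title="*Middleware*"
| table title eai:acl.app eai:acl.owner updated
```

Scheduling a search over the `updated` field (for example, comparing it against a baseline lookup) and attaching an email alert action is one way to get notified when a report changes.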
Is there a way (bulk is better) to update the email field in an alert action trigger through search/REST?
Hey everyone,

Long time user, first time poster. Here's the root of my problem: I'm supporting users in a practice environment who wanted to use Ubuntu 21.10 server to host Splunk. The install goes great right up until the point at which users are instructed to run:

/opt/splunk/bin/splunk enable boot-start -systemd-managed 1

The splunk binary creates the Splunkd.service file as you'd expect. But it doesn't actually work:

root@short-term-test:/home/ayy# systemctl start Splunkd.service
Job for Splunkd.service failed because the control process exited with error code.
See "systemctl status Splunkd.service" and "journalctl -xeu Splunkd.service" for details.

And of course, the journal doesn't tell you anything other than that the service failed. But /var/log/syslog gives you enough breadcrumbs to figure out the root of the problem:

Jan 11 20:06:18 short-term-test bash[7690]: chown: cannot access '/sys/fs/cgroup/cpu/system.slice/Splunkd.service': No such file or directory

So we go to /etc/systemd/system/Splunkd.service:

root@short-term-test:/home/ayy# cat /etc/systemd/system/Splunkd.service
#This unit file replaces the traditional start-up script for systemd
#configurations, and is used when enabling boot-start for Splunk on
#systemd-based Linux distributions.
[Unit]
Description=Systemd service file for Splunk, generated by 'splunk enable boot-start'
After=network.target

[Service]
Type=simple
Restart=always
ExecStart=/opt/splunk/bin/splunk _internal_launch_under_systemd
KillMode=mixed
KillSignal=SIGINT
TimeoutStopSec=360
LimitNOFILE=65536
SuccessExitStatus=51 52
RestartPreventExitStatus=51
RestartForceExitStatus=52
User=splunk
Group=splunk
Delegate=true
CPUShares=1024
MemoryLimit=2073997312
PermissionsStartOnly=true
ExecStartPost=/bin/bash -c "chown -R splunk:splunk /sys/fs/cgroup/cpu/system.slice/%n"
ExecStartPost=/bin/bash -c "chown -R splunk:splunk /sys/fs/cgroup/memory/system.slice/%n"

[Install]
WantedBy=multi-user.target

In the ExecStartPost lines we see attempts to change file ownership for all the cgroup files under /sys/fs/cgroup/cpu/system.slice/Splunkd.service and /sys/fs/cgroup/memory/system.slice/Splunkd.service, which is exactly where /var/log/syslog says the service is failing. So I searched through the community posts and discovered this is something of a known problem:

https://community.splunk.com/t5/Installation/Systemd-broken-on-new-install/m-p/482885#M10392
https://community.splunk.com/t5/Installation/Why-is-Splunk-not-starting-after-upgrade-to-8/m-p/478826

From these two posts, there appear to be two schools of thought:

- Splunk hardcodes the cgroup directories it thinks the service should be using, so if they're wrong, find them yourself. From my brief experience with Ubuntu 21.10 and the new cgroup setup, the files appear to be located under /sys/fs/cgroup/system.slice/Splunkd.service/.
- Splunk doesn't need the chown commands if /opt/splunk/etc/splunk-launch.conf is set to lower its privileges to the splunk user (SPLUNK_OS_USER=splunk). If I comment out the ExecStartPost statements from Splunkd.service and run systemctl daemon-reload so the system honors the service file changes, Splunk starts normally, which sort of supports the idea that Splunk can handle its cgroup file permissions internally.
This brings me to my actual questions:

- Would there be any ill effects from changing the Splunkd.service unit to chown all the files in the /sys/fs/cgroup/system.slice/Splunkd.service/ directory to the splunk user? Or are there specific files and parameters that I need to leave alone?
- Can I instead remove/disable the ExecStartPost statements and rely on the SPLUNK_OS_USER parameter in /opt/splunk/etc/splunk-launch.conf?
- Is there some sort of plan to fix the systemd Splunkd.service file? I understand that Ubuntu 21.10 is NOT a long-term support release, BUT I have a feeling that this adoption of cgroups v2 and the new cgroup layout is going to be a prevalent change in the next long-term support release (e.g. 22.04 and beyond), and I really don't want to have to apply this workaround for every fresh Splunk install in the future.

Thank you for your time and consideration.
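For what it's worth, if removing the chown lines does prove safe, a systemd drop-in would disable them without editing the generated unit file (which a future `splunk enable boot-start` run could regenerate); an empty `ExecStartPost=` directive resets the accumulated list. This is a sketch, assuming SPLUNK_OS_USER=splunk is set in splunk-launch.conf as described above:

```
# /etc/systemd/system/Splunkd.service.d/override.conf
[Service]
ExecStartPost=
```

Followed by `systemctl daemon-reload` and `systemctl restart Splunkd.service` to pick up the override.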
Community, in case you have been out on a deserted island: a few NIST-reported vulnerabilities have impacted AppDynamics controllers (SaaS and on-prem) and a number of Java-based agents.

The main reason I am authoring this thread for the community is to offer my perspective, but also to humbly request that if I got something wrong or missed something, you take the five minutes to call it out and help me and others better understand the nature of this beast.

The vulnerabilities seem to fall into three buckets:
1. Direct - the component ships the Apache log4j (core) jars at an impacted version
2. Indirect - a library jar has the log4j jars within it (nested jars)
3. Diverged - at some point the log4j source was branched and modified into a custom version

Examples:
1. The machine agent, within <agent home>/lib/log4j-core-2.x.jar
2. I don't have a good example of this for AppDynamics
3. The machine agent, within <agent-home>/lib/singularity-log4j-1.2.15.6.jar, under singularity-log4j-1.2.15.6.jar\com\singularity\util\org\apache\log4j\

Within the third example there is a NOTICE file in the META-INF folder that says "Copyright 2017. AppDynamics modified from Log4j2". So the fork was made after the end of life of 1.2.x, and without a different detection method we are unsure whether the vulnerabilities that plague us are present in this variant (yes, a Loki reference).

Now we can get into the detection methods.

By file name
For case 1, it is a simple directory listing and a search for log4j*. Case 2 gets a bit messy, since each of the JAR files has to be listed to search for log4j. The third case is much like case 2, but messier still, since, as exampled above, the file name is mangled with "singularity".

By JVM class presence
For each of the three cases, this method requires that another Java class be injected into the JVM, which can then attempt to exercise the log4j vulnerability, be that executing a JNDI lookup or capturing and deserializing a message packet.
By directed attack
Some of the vulnerabilities have exploits in the wild, and those exploits have already been deployed against a number of targets with very destructive results.

The majority of vulnerability-scan vendors are continuing to fine-tune their scan patterns, but to be 100% sure, IMHO, there really needs to be a Java class injection that pulls the in-memory log4j class references and probes them, especially with the diverged variants.

------- Laundry list of vulnerabilities --------------
https://nvd.nist.gov/vuln/detail/CVE-2021-44228   10.0 Critical   Impacts log4j 2.x
https://nvd.nist.gov/vuln/detail/CVE-2021-45046   9.0 Critical    Impacts log4j 2.16.x
https://nvd.nist.gov/vuln/detail/CVE-2021-45105   5.8 Medium      Impacts log4j 2.17.x
https://nvd.nist.gov/vuln/detail/CVE-2021-4104    7.5 High        Impacts log4j 1.2.x - basically, 1.2 reached end of life in 2015, but another vulnerability was found: "deserialization of untrusted data when the attacker has write access to the Log4j configuration"

Thank you, Billy Cole
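The file-name and nested-jar checks described above can be sketched in a few lines. This is a hypothetical helper, not an official detection tool: it covers case 1 by file name and case 2 by listing each jar's entries for log4j class paths (e.g. the JndiLookup class targeted by CVE-2021-44228). It would not catch diverged variants whose class packages were renamed, which is exactly the case-3 problem described above.

```python
import os
import zipfile

def scan_for_log4j(root):
    """Walk root, flagging jars named log4j* (case 1) and jars that
    contain log4j class entries inside them (case 2)."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if not name.lower().endswith((".jar", ".war", ".ear")):
                continue
            if "log4j" in name.lower():          # case 1: direct, by file name
                hits.append((path, "filename"))
                continue
            try:                                  # case 2: nested classes
                with zipfile.ZipFile(path) as jar:
                    if any("org/apache/logging/log4j" in entry
                           or entry.endswith("JndiLookup.class")
                           for entry in jar.namelist()):
                        hits.append((path, "nested"))
            except zipfile.BadZipFile:
                pass                              # skip corrupt archives
    return hits
```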
I am attempting to make a line graph from a CSV with information from the past year:

Category          Nov 2020  December 2020  January 2021  February 2021
Events                  19              9             5              7
Cleared                  3              1             1              7
Incidents                3              1             1              0
False Positives         16              8             4              7

I need each category to have its own line on a line graph, with the months on the x-axis (Nov 20 - Nov 21). It doesn't seem complicated, but I can't seem to get the results. Any help would be appreciated. Thanks.
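One sketch (the lookup file name is a placeholder, and it assumes the first CSV column is named Category and the month headers are normalized to one strptime-parsable format): transpose swaps the months into rows so each category becomes its own series.

```
| inputlookup yearly_counts.csv
| transpose 0 header_field=Category column_name=Month
| eval _time=strptime(Month, "%b %Y")
| sort _time
| table _time Events Cleared Incidents "False Positives"
```

Rendered as a line chart, each category column then draws as its own line over the month on the x-axis.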
Hi, we have a Splunk distributed cluster setup with 3 indexers, 3 search heads, and 1 cluster master.

The clusters were healthy when the cluster setup was done. However, after switching log traffic to the Splunk indexers, we see one of the search heads detach from the cluster and keep restarting. Below is the error I see in splunkd.log for the search head that is having the problem:

01-11-2022 17:39:57.495 +0000 INFO MetricSchemaProcessor [1424 typing] - channel confkey=source::/opt/splunk/var/log/introspection/disk_objects.log|host::splunk-shc-search-head-2|splunk_intro_disk_objects|CLONE_CHANNEL has an event with no measure, will be skipped.
01-11-2022 17:39:57.751 +0000 INFO IndexProcessor [1173 MainThread] - handleSignal : Disabling streaming searches.
01-11-2022 17:39:57.752 +0000 INFO IndexProcessor [1173 MainThread] - request state change from=RUN to=SHUTDOWN_SIGNALED
01-11-2022 17:39:57.752 +0000 INFO SHClusterMgr [1173 MainThread] - Starting to Signal shutdown RAFT
01-11-2022 17:39:57.752 +0000 INFO SHCRaftConsensus [1173 MainThread] - Shutdown signal received.
01-11-2022 17:39:57.752 +0000 INFO SHClusterMgr [1173 MainThread] - Signal shutdown RAFT completed
01-11-2022 17:39:57.752 +0000 INFO UiHttpListener [1173 MainThread] - Shutting down webui
01-11-2022 17:39:57.752 +0000 INFO UiHttpListener [1173 MainThread] - Shutting down webui completed

Any insights on what is causing this?
Hello, this question has probably been asked and answered, but I just can't seem to find the best solution. I have a search that returns N similar JSON objects of approximately this shape:

{
  "name": "name",
  "id": "id",
  "somelist": [
    { "name": "foo", "value": "bar" },
    { "name": "foo", "value": "baz" },
    ...
  ]
}

I want to compare the "somelist" part of every object to another object, and in the end write out the diff between them to a separate column.

Thanks a lot, Vadim
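One possible starting point (a sketch; field names follow the JSON shape above) is to flatten somelist per object so the entries become comparable rows:

```
... | spath path=somelist{} output=item
| mvexpand item
| spath input=item
| stats values(value) as values by id name
```

From there, the multivalue `values` fields of two objects can be compared, for example with mvmap or setdiff-style eval logic, to produce the diff column.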
Hi, I have this Splunk query:

index=xxxxxxx sourcetype="xxxxxx" EXPRSSN=IBM4D*
| eval DATE=strftime(strptime(DATE,"%d%b%Y"),"%Y-%m-%d")
| table EXPRSSN DATE MIPS
| eval _time=strptime(DATE." "."00:00:00","%Y-%m-%d %H:%M:%S")
| chart values(MIPS) over _time by EXPRSSN

I want to add a linear trendline to my chart, hoping I could re-create this. How do I also customize my line chart? I would like the other series filled as well; I am getting the one below from Splunk.
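The trendline command can add a moving-average series, but it operates on a single numeric field, so after splitting by EXPRSSN one series field has to be named explicitly (the series name IBM4D01 below is a placeholder for one of your actual EXPRSSN values). A sketch:

```
index=xxxxxxx sourcetype="xxxxxx" EXPRSSN=IBM4D*
| eval _time=strptime(DATE,"%d%b%Y")
| timechart span=1d values(MIPS) by EXPRSSN
| trendline sma5("IBM4D01") as IBM4D01_trend
```

Note this is a smoothed moving average rather than a true linear regression; a fitted straight line would need something like the Machine Learning Toolkit. Area fill for individual series is controlled in the chart's formatting options (chart overlay / fill settings), not in SPL.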