All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I configured a new connection in DB Connect (v3.2.0) and was initially able to see logs, but when I checked recently, no logs were coming in. Could someone help me out, as I am fairly new to Splunk DB Connect? Alternatively, could you give me a restart command so I can verify whether logs are flowing?
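For reference, restarting Splunk and then checking DB Connect's own internal logs is a common first step. A sketch; the dbx* sourcetype wildcard is an assumption, since exact sourcetype names vary by DB Connect version:

```
# Restart Splunk on the instance that hosts DB Connect:
$SPLUNK_HOME/bin/splunk restart

# Then search DB Connect's internal logs for errors (sketch;
# sourcetype names vary by DB Connect version):
index=_internal sourcetype=dbx* log_level=ERROR
```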
I have a host that is not allowed to connect to the cloud due to security restrictions. Is there a way to run reports locally when I log in remotely (via Bomgar)?
I'm trying to set up GuardDuty log collection, but so far I haven't been successful. I have a heavy forwarder that forwards logs to the indexer, and the search head queries the data on the indexer. I configured the add-ons on the necessary platforms (heavy forwarder and search head) and set up the accounts needed to forward the logs. The logs are sent via the SQS queue and arrive at the indexer, but I can't get the dashboards populated (Splunk App for AWS and aws_guardduty). On the dashboards, the 'AccountID' menu shows the message "Search produced no results." I also tried configuring via HEC, but I get an error saying it cannot connect to the platform. Can someone help me?
I have a use case where I need to take the message from a previously performed search and replace part of it with an empty string. Sample event and search:

environment="dev" domain="test" logger_name="com.test.practice.demo.sse.impl.EventEncrypter" message="Data = "
| eval message=replace(message," Data = ","")

The resulting message must then be used in another operation, but the replace function itself is not working when I run the search: I can see the log with "Data =" not removed, coming through as-is. I need to solve this soon; can you please suggest a solution?
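For context, SPL's replace(X, Y, Z) treats Y as a regular expression, so a pattern that tolerates variable whitespace is often more robust than the literal " Data = " string (which fails if the leading space isn't actually in the field value). A minimal Python sketch of the same regex substitution; the sample string is hypothetical:

```python
import re

# Hypothetical raw text mirroring the event in the question.
raw = 'logger_name="com.test.practice.demo.sse.impl.EventEncrypter" message="Data = foo"'

# \s* tolerates any amount of whitespace around "Data =",
# which a literal " Data = " pattern would miss.
cleaned = re.sub(r'Data\s*=\s*', '', raw)
print(cleaned)
```

The equivalent SPL would be `| eval message=replace(message, "Data\s*=\s*", "")`, assuming the "Data =" text really is inside the extracted message field and not truncated by the embedded quote.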
I cannot for the life of me figure out how to invite users to the Splunk Cloud trial I've started. The documentation says "Go to Settings -> Access Controls", but I can find no such option; all I can see that's remotely related is "Users and Authentication -> Roles". I've also tried going to "My Splunk -> Invite Users" in the top right. That just takes me to my instances and tells me to "create a custom role", etc., which I've also done. Am I missing something here?
Hi all, I have a JSON file like this:

{
  "NUM" : "#1",
  "TIME" : "1/27/2020 12:49:13",
  "STATUS" : "PASS",
  "DURATION" : "144",
  "COMPONENTS": [
    {
      "NAME" : "a",
      "ACCEPTEDCHANGESETS": [ "abc", "xyz", "efg", ],
      "NAME" : "a",
      "REJECTEDCHANGESETS": [ "pqr", ],
      "NAME" : "b",
      "ACCEPTEDCHANGESETS": [ "uop", "ghf", "yui" ]
    }
  ]
}

How can I find the count of "ACCEPTEDCHANGESETS" by "NAME"? I asked a similar question previously and tried referring to that answer, but I could not get a proper result. Can anyone please help me with this?
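As posted, the JSON is not valid (duplicate "NAME" keys inside one object, trailing commas), which is likely part of the problem: a JSON parser keeps only the last duplicate key. Assuming the intended structure is one object per component, the count is straightforward; a Python sketch under that assumption:

```python
# Hypothetical corrected structure: COMPONENTS as a list of objects,
# one NAME per object (the posted JSON collapses duplicate keys).
data = {
    "NUM": "#1",
    "COMPONENTS": [
        {"NAME": "a", "ACCEPTEDCHANGESETS": ["abc", "xyz", "efg"]},
        {"NAME": "a", "REJECTEDCHANGESETS": ["pqr"]},
        {"NAME": "b", "ACCEPTEDCHANGESETS": ["uop", "ghf", "yui"]},
    ],
}

# Sum the length of each component's ACCEPTEDCHANGESETS per NAME.
counts = {}
for comp in data["COMPONENTS"]:
    accepted = comp.get("ACCEPTEDCHANGESETS", [])
    counts[comp["NAME"]] = counts.get(comp["NAME"], 0) + len(accepted)

print(counts)  # a -> 3, b -> 3
```

In Splunk the equivalent would typically be done with spath, mvexpand, and stats, but that only works once the JSON itself is well-formed.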
When I exported a dashboard to PDF, the dashboard showed more than 20,000 detection results, but the PDF displayed only up to 1,000 of them. Is this by design?
I have 23 KPIs created for a service and all of them are green (normal), but the overall service health is showing as N/A. What is the reason for this? Please help.
Hello everyone, I tried using 'useACK' with batch monitoring. It seems to work when the system is experiencing ordinary network issues; however, once the system reboots (hard), the data held in memory is lost. I read that persistentQueue stores the data on disk instead of in memory, but that option does not support batch monitoring. Is there a way to retain the data across a reboot while using batch monitoring? Thank you. Regards, Raj
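For reference, persistentQueue is configured per input stanza in inputs.conf and, as noted above, applies to network and scripted inputs rather than [batch://] or [monitor://] stanzas. A hypothetical sketch of what it looks like on a TCP input (port and sizes are placeholders):

```
# inputs.conf sketch: persistent queueing on a network input.
# Not valid on batch:// or monitor:// stanzas.
[tcp://9999]
queueSize = 10MB
persistentQueueSize = 100MB
```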
Hi, I use the search below to display a pie chart. As you can see in the screenshot, I have 2 sections (highlighted in yellow). What I need is that clicking on a section displays the related data in a drilldown. My drilldown XML is below, but nothing happens.

`CPU`
| fields process_cpu_used_percent host process_name
| where process_cpu_used_percent>80
| dedup host process_name
| lookup TUTU.csv HOSTNAME as host output SITE
| search SITE=$tok_filtersite|s$
| eval process_name=case(
    process_name like "mfev%" OR process_name like "mcdatrep" OR process_name=="mcshield" OR process_name=="amupdate" OR process_name=="McScript_InUse" OR process_name=="macompatsvc" OR process_name=="FrameworkService" OR process_name=="McScanCheck", "McAFEE",
    process_name like "Wmi%", "WMI",
    process_name=="conhost", "CMD Windows console",
    process_name=="csrss" OR process_name=="System" OR process_name=="TiWorker" OR process_name=="msfeedssync" OR process_name=="msiexec" OR process_name=="rundll32" OR process_name=="services" OR process_name like "svchost%" OR process_name=="OneDriveSetup" OR process_name=="poqexec" OR process_name=="unsecapp" OR process_name=="TabTip" OR process_name=="Memory_Compression" OR process_name=="SetupHost" OR process_name=="WerFault" OR process_name=="explorer" OR process_name=="mscorsvw" OR process_name=="sppsvc" OR process_name=="ngen" OR process_name=="spoolsv" OR process_name=="SrTasks" OR process_name=="policyHost" OR process_name=="dwm" OR process_name=="perf-test-9c" OR process_name like "SearchProtocolHost%" OR process_name like "RuntimeBroker%" OR process_name like "LogonUI%", "Windows native process")
| search host=$tok_filterhost$
| stats count(host) as Total by process_name
| sort -Total limit=10

DRILLDOWN XML:

<dashboard>
  <label>TEST PROCESS</label>
  <row>
    <panel>
      <chart>
        <search>
          <query>`CPU`
| fields process_cpu_used_percent host process_name
| where process_cpu_used_percent>80
| dedup host process_name
| eval PROCESS=case(
    process_name like "mfev%" OR process_name like "mcdatrep" OR process_name=="mcshield" OR process_name=="amupdate" OR process_name=="McScript_InUse" OR process_name=="macompatsvc" OR process_name=="FrameworkService" OR process_name=="McScanCheck", "McAFEE",
    process_name like "Wmi%", "WMI",
    process_name=="conhost", "CMD Windows console",
    process_name=="csrss" OR process_name=="System" OR process_name=="TiWorker" OR process_name=="msfeedssync" OR process_name=="msiexec" OR process_name=="rundll32" OR process_name=="services" OR process_name like "svchost%" OR process_name=="OneDriveSetup" OR process_name=="poqexec" OR process_name=="unsecapp" OR process_name=="TabTip" OR process_name=="Memory_Compression" OR process_name=="SetupHost" OR process_name=="WerFault" OR process_name=="explorer" OR process_name=="mscorsvw" OR process_name=="sppsvc" OR process_name=="ngen" OR process_name=="spoolsv" OR process_name=="SrTasks" OR process_name=="policyHost" OR process_name=="dwm" OR process_name=="perf-test-9c" OR process_name like "SearchProtocolHost%" OR process_name like "RuntimeBroker%" OR process_name like "LogonUI%", "Windows native process")
| stats dc(eval(if(process_cpu_used_percent > 50,host,NULL))) as Total by PROCESS
| where like(PROCESS,"McAFEE") OR like(PROCESS,"Windows native process")
| stats dc(eval(if(process_cpu_used_percent > 50,host,NULL))) as Total by process_name
| sort 0 - Total</query>
          <earliest>-7d@h</earliest>
          <latest>now</latest>
        </search>
        <option name="charting.chart">pie</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
</dashboard>

What is the problem, please?
Hi, I have a statistics table in which each column contains different values. For example:

Application Name   Application ID   Functional Fitness   Digital Fitness
A                  123              0.2                  0.5
B                  456              3                    5
C                  789              1                    1.5

I want to change the colour of the value (not the background) in the Functional Fitness and Digital Fitness columns based on a range: 0 to 0.5 green, 0.6 to 3 red, 3.5 to 5 yellow. Can you please help me change the colour of the value in each column based on these ranges?
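For context, Simple XML table formatting can colour cells by threshold, but the built-in option colours the cell background; colouring only the text would need custom CSS or JavaScript. A sketch for one column, with thresholds taken from the ranges above (the hex colour values are assumptions):

```
<format type="color" field="Functional Fitness">
  <colorPalette type="list">[#65A637,#D93F3C,#F7BC38]</colorPalette>
  <scale type="threshold">0.6,3.5</scale>
</format>
```

With two thresholds and three colours, values below 0.6 get the first colour, 0.6 up to 3.5 the second, and 3.5 and above the third.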
Hi folks, is it possible to implement a "folder-lock" mechanism for a Splunk app? The basic requirement is that I don't want my Splunk admins to see the contents of /opt/splunk/etc/secretapp. Is this achievable via Splunk? Is there any app- or folder-level encryption that can be done?
For an alert that logs to Slack, I have this config to show the errors from the log in the channel:

Message: $result.log$

I am missing the stack trace; it seems that only the first line (the message part of the error, without the stack trace) is shown. What could be the cause or resolution for this? Thanks
Hi all, I need help with an issue which might be a known one; please help. Very frequently, one or another indexer's splunkd service goes into a "defunct" state; when I then try to restart the service, all Splunk-related PIDs end up in that state as well, so the service cannot be restarted either. Initially it showed "defunct", then I tried to restart splunk.service:

[root@hostname ~]# ps -ef | grep defunct
root     23399 23167  0 21:49 pts/1    00:00:00 grep --color=auto defunct
svc.spl+ 32079  4383  0 09:07 ?        00:00:03 [splunkd]
[root@hostname ~]#

I tried to restart, and the status is below:

[root@hostname ~]# systemctl status splunk.service
● splunk.service - Splunk
   Loaded: loaded (/usr/lib/systemd/system/splunk.service; enabled; vendor preset: disabled)
   Active: activating (start) since Sun 2020-02-02 22:45:54 MST; 1min 9s ago
  Process: 29434 ExecStop=/opt/splunk/bin/splunk stop (code=killed, signal=TERM)
 Main PID: 4381; : 32121 (splunk)
   CGroup: /system.slice/splunk.service
           └─32121 /opt/splunk/bin/splunk start --answer-yes --no-prompt --accept-license
           ‣ 4381 [splunkd]

Feb 02 22:45:54 hostnme.domain.com systemd[1]: Starting Splunk...
Feb 02 22:45:54 hostnme.domain.com splunk[32121]: splunkd 4381 was not running.
Feb 02 22:45:54 hostnme.domain.com splunk[32121]: Stopping splunk helpers...

[root@hostname ~]# ps -ef | grep splunk
svc.spl+  4381     1 99 Jan30 ?        5-12:21:04 [splunkd]
svc.spl+  7638     1  0 09:05 ?        00:00:37 [splunkd]
svc.spl+  7639  7638  0 09:05 ?        00:00:00 [splunkd]
svc.spl+ 26167     1  0 09:06 ?        00:00:14 [splunkd]
svc.spl+ 26168 26167  0 09:06 ?        00:00:00 [splunkd]
svc.spl+ 32079     1  0 09:07 ?        00:00:03 [splunkd]
svc.spl+ 32080 32079  0 09:07 ?        00:00:00 [splunkd]
svc.spl+ 32121     1  0 22:45 ?        00:00:00 /opt/splunk/bin/splunk start --answer-yes --no-prompt --accept-license
root     32281 23128  0 22:47 pts/1    00:00:00 grep --color=auto splunk
svc.spl+ 36590     1  0 09:08 ?        00:00:03 [splunkd]
svc.spl+ 36597 36590  0 09:08 ?        00:00:00 [splunkd]
svc.spl+ 36744     1  0 09:08 ?        00:00:08 [splunkd]
svc.spl+ 36746 36744  0 09:08 ?        00:00:00 [splunkd]
svc.spl+ 45222     1  0 09:10 ?        00:00:01 [splunkd]
svc.spl+ 45224 45222  0 09:10 ?        00:00:00 [splunkd]

Any help would be appreciated. Thank you.
Which of the following (in terms of the REGEX) is the most efficient? I've seen examples of all of them. And is there an even more efficient REGEX?

[send_to_nullQueue_dot]
SOURCE_KEY = _raw
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[send_to_nullQueue_dot_star]
SOURCE_KEY = _raw
REGEX = .*
DEST_KEY = queue
FORMAT = nullQueue

[send_to_nullQueue_dot_plus]
SOURCE_KEY = _raw
REGEX = .+
DEST_KEY = queue
FORMAT = nullQueue

[send_to_nullQueue_dot_star_lazy]
SOURCE_KEY = _raw
REGEX = .*?
DEST_KEY = queue
FORMAT = nullQueue

[send_to_nullQueue_captured_dot_star]
SOURCE_KEY = _raw
REGEX = (.*)
DEST_KEY = queue
FORMAT = nullQueue
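For context: all five patterns match any non-empty event, and for routing to nullQueue the engine only needs the match to succeed, so the practical performance difference is negligible; the capturing group in (.*) just adds useless capture work. A quick illustration of the matching behaviour, sketched in Python's re module (regex semantics here mirror the PCRE-style patterns above):

```python
import re

patterns = ['.', '.*', '.+', '.*?', '(.*)']

# Every pattern matches somewhere in a non-empty string,
# which is all a nullQueue transform needs.
event = 'any non-empty event'
assert all(re.search(p, event) for p in patterns)

# The zero-width-capable ones also match an empty string,
# which is why "." or ".+" is sometimes preferred for clarity.
empty_matchers = [p for p in patterns if re.search(p, '')]
print(empty_matchers)  # ['.*', '.*?', '(.*)']
```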
Using the universal forwarder, I need to monitor multiple directories in separate parts of the filesystem. Specifically (obfuscated so as not to identify our customer):

[monitor:///var/log]
[monitor:///home//logs]

It seems that multiple monitor stanzas are not working (at least, our customer is reporting that the second monitor stanza is not forwarding any files to their Splunk instance). Is there a workable solution?
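For reference, multiple monitor stanzas in one inputs.conf are supported; when one appears to do nothing, the usual suspects are a path typo, file permissions, or the files already being recorded as read. A sketch with each stanza made explicit (the obfuscated path is kept as posted; disabled = false is just for emphasis):

```
[monitor:///var/log]
disabled = false

[monitor:///home//logs]
disabled = false
```

Running `splunk list monitor` on the forwarder then shows which files are actually being watched.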
Hi. I'm new to Splunk and trying to write a search for the top 30 applications by bandwidth. So far I have the following search, and I'm wondering if anyone has ideas on how to get it to work. I have put '*' in my index as it's classified. I would like the output in a table.

index=* sourcetype=*=* OR *
| eval byteReceivedMB=round(rcvdbyte/1024/1024,2)
| eval byteSentMB=round(sentbyte/1024/1024,2)
| stats sum(byteReceivedMB) as "Megabytes Received" sum(byteSentMB) as "Megabytes Sent" by app
| addtotals
| dedup app
| sort limit=30 -Total
Using Splunk 7.3.3: after I initiated a rolling restart from the cluster master (multi-site indexer cluster), the first indexer began to restart. It then showed batch adding, and the Indexer Clustering: Master Node page showed that the indexer failed to restart:

[Mon Feb 2 12:47:52 2020] Failed to restart peer=<GUID> peer_name=<hostname>. Moving to failed peer group and continuing.
[Mon Feb 2 12:47:52 2020] Failing peer=<GUID> peer_name=<hostname> timed out while trying to restart.

I pinged this indexer from the CM and it responded fine. Connectivity was not an issue before the rolling restart, and network connectivity appears to be working fine. Is there a timeout window or setting I can adjust to better accommodate network latency and give the CM more time to reach the peer? What does this mean for my rolling restart: will the remaining peers still be restarted, and should I restart this one manually? How can I list this "failed peer group" to see all systems that may have failed to restart?
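There is such a setting: restart_timeout in the [clustering] stanza of server.conf on the cluster master controls how long the master waits for a peer to come back before moving it to the failed group (default 60 seconds). A sketch; the 180 value is only an example:

```
# server.conf on the cluster master (value in seconds)
[clustering]
restart_timeout = 180
```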
Hi, I have two fields with different values, and I want a count based on both. These are events and hosts occurring in log files. For example, I have fields host and event. Host values are A, B, C, D; event values are reboot, running, shutdown. The results should look like this:

_time   A:reboot   A:shutdown   B:reboot   C:running
00:01   1          2            1          4
00:02   2          4            3          1
00:03   0          1            4          5
00:04   1          2            0          1
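One way to get that layout (a sketch in SPL, assuming host and event are already extracted fields; the index name and time span are placeholders) is to combine the two fields into one and timechart on the combined value:

```
index=your_index
| eval host_event = host . ":" . event
| timechart span=1m count by host_event
```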