All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi All, I have a lookup file with 2 columns, Col1 and SPL_Qry. Each value in Col1 has an associated Splunk query. In a dashboard, if I select any value from the drop-down, the associated query should run and show me the result in the Splunk dashboard. Please advise. Example: LookupFile.csv
Column 1 | SPL_Query
value1 | Qry_Related_to_Value1
value2 | Qry_Related_to_Value2
value3 | Qry_Related_to_Value3
value4 | Qry_Related_to_Value4
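One possible way to wire this up (a sketch, not a tested solution): populate the dropdown from the lookup, then use the map command to run the query stored for the selected value. The token name sel_value is illustrative, the lookup columns are assumed to be named Column1 and SPL_Query, and each SPL_Query value is assumed to be a complete search (if the lookup stores bare search terms, use search="search $$SPL_Query$$" instead).

<form version="1.1">
  <fieldset submitButton="false">
    <input type="dropdown" token="sel_value">
      <label>Select a value</label>
      <!-- populate the dropdown from the first column of the lookup -->
      <search>
        <query>| inputlookup LookupFile.csv | fields Column1</query>
      </search>
      <fieldForLabel>Column1</fieldForLabel>
      <fieldForValue>Column1</fieldForValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <!-- look up the SPL stored for the selected value, then run it with map;
               $$ renders a literal $ so map sees $SPL_Query$ and substitutes the field value -->
          <query>| inputlookup LookupFile.csv | search Column1="$sel_value$" | map search="$$SPL_Query$$"</query>
        </search>
      </table>
    </panel>
  </row>
</form>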
Hi guys! I want to see the average duration of user activity in Splunk, but I can't find a logout field.
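A rough SPL sketch of one possible approximation when there is no logout event: treat the span between a user's first and last event in each day as the active duration. The index and field names here are assumptions.

index=your_auth_index user=*
| bin _time span=1d
| stats min(_time) as first_seen max(_time) as last_seen by user _time
| eval duration=last_seen-first_seen
| stats avg(duration) as avg_duration_secs by user
| eval avg_duration=tostring(round(avg_duration_secs), "duration")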
Hello, I have a simple query that runs on the last 10 days of the month, over around 300k events, something like: index=myindex RESPCODE=0 |bin span=1mon _time |eval length=len(POS_ENTRY_CODE) |search POS_ENTRY_CODE=8* AND length=2 |stats count The scheduled run returned 6, which is a bit lower than what I expected, so I hit Run to run the same query ad hoc, and it returned 12, which is correct. The scheduled run took 17 min and the ad-hoc run took 19 min. My Splunk version is 7.2.6. So what is wrong with my scheduled alert, and what should I do to mitigate this problem?
I have things with IDs in my prebuilt panel, e.g.: <input type="link" id="submit_button6"> <choice value="submit">Gabriel</choice> </input> And JavaScript and CSS loaded in my dashboard: <form version="1.1" script="GV-Utils:submit_button.js" stylesheet="GV-Utils:submit_button.css"> The panel works when it's in the simple XML of the dashboard, with the CSS applied and the JS triggering actions when the button is clicked, as designed. But if I put the panel verbatim in a prebuilt panel and reference it in the dashboard, the CSS is not applied and the JS doesn't run. The reason I'd like that part of the dashboard in a prebuilt panel is so that users of my app (ES Choreographer | Splunkbase) can safely customise this part of the dashboard without touching the dashboard XML. If users touch the simple XML of the dashboard, it creates a "local" version of the whole dashboard which takes precedence over the "default" version, and that forever prevents this dashboard from being upgraded when the app is upgraded. Any ideas?
| makeresults count=730 | streamstats count | eval _time=_time-(count*86400) | timechart Count as Timestamp span=1mon | join type=left _time [| savedsearch XYZ | eval today = strftime(relative_time(now(), "@d"), "%Y-%m-%d %H:%M:%S.%N") | where like (APP_NAME ,"%") and like (BS_ID,"%") and like (Function,"%") and like (DEPARTMENT_LONG_NAME,"%") and like (COUNTRY,"%") and like(EMPLOYEE_TYPE,"%") | eval _time = strptime(FROM_DATE, "%Y-%m-%d %H:%M:%S.%N") | timechart Count as Created span=1mon | streamstats sum(Created) as Createdcumulative] | join type=left _time [| savedsearch XYZ | where like (APP_NAME ,"%") and like (BS_ID,"%") and like (Function,"%") and like (DEPARTMENT_LONG_NAME,"%") and like (COUNTRY,"%") and like(EMPLOYEE_TYPE,"%") | eval _time = strptime(TO_DATE, "%Y-%m-%d %H:%M:%S.%N") | timechart Count as Deactivated span=1mon | streamstats sum(Deactivated) as Deactivatedcumulative] Can anyone explain this code to me, please? I am new and need a brief explanation.
We are looking for a way to monitor commands/scripts executed on a specific Linux server. Is there any available app?
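One commonly cited option is the Splunk Add-on for Unix and Linux; another is to have auditd record executed commands and monitor its log with a Universal Forwarder. A minimal inputs.conf sketch, assuming auditd already writes to /var/log/audit/audit.log; the index and sourcetype names below are placeholders for your environment.

# inputs.conf on the Universal Forwarder (sketch)
[monitor:///var/log/audit/audit.log]
index = linux_audit
sourcetype = linux:audit
disabled = false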
Hello Team, I am getting "your connection to this site is not secure" on Splunk Web in one instance. I have installed the certificate as well, and a similar kind of certificate is working fine on another Splunk instance. Certificate installed path: /opt/splunk/etc/auth. Please help to resolve this issue.
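For reference, a web.conf sketch of the settings Splunk Web reads for its certificate; the file names below are examples only. Note that this browser warning often also means the certificate chain is incomplete, the certificate is self-signed, or the CN/SAN does not match the URL used to reach the instance.

# $SPLUNK_HOME/etc/system/local/web.conf (sketch)
[settings]
enableSplunkWebSSL = true
serverCert = /opt/splunk/etc/auth/mycerts/splunkweb_cert_and_chain.pem
privKeyPath = /opt/splunk/etc/auth/mycerts/splunkweb_private.key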
I am a product manager working on a product. We are looking to build an integration with the Splunk Cloud Platform, and we are even open to purchasing the product to achieve this. I have raised multiple pricing quote requests through their website, but there is no response from their side. Please let me know if there are any other ways to do this.
Hi, recently I upgraded my single-instance Splunk from 7.2.2 to 8.1.0. splunkd keeps crashing every day at a specific time, around 6 PM. When I checked the latest crash log I got the below error.
Received fatal signal 11 (Segmentation fault).
Cause: Signal sent by kernel.
Crashing thread: archivereader
Registers:
RIP: [0x0000564C2B743B40] _ZN14CharacterClass9set_multiEPKcmb + 32 (splunkd + 0x21D4B40)
.
OS: Linux
Arch: x86-64
Backtrace (PIC build):
[0x0000564C2B743B40] _ZN14CharacterClass9set_multiEPKcmb + 32 (splunkd + 0x21D4B40)
[0x0000564C2B2DFA45] _ZN27STDataInputHeaderProcessing21performPostProcessingEP11PipelineSetR12PipelineData + 69 (splunkd + 0x1D70A45)
[0x0000564C2AB9BFEF] _ZN16ArchiveProcessor29performSTDataHeaderProcessingEv + 47 (splunkd + 0x162CFEF)
[0x0000564C2AB9C23C] _ZN16ArchiveProcessor10writeEventEPKcm + 492 (splunkd + 0x162D23C)
[0x0000564C2AB9E6CF] _ZN16ArchiveProcessor22awaitingClassificationEPKcm + 287 (splunkd + 0x162F6CF)
[0x0000564C2AB9E741] _ZN16ArchiveProcessor5writeEPKvm + 65 (splunkd + 0x162F741)
[0x0000564C2B14453C] _ZN14ArchiveContext7processERK8PathnameP13ISourceWriter + 940 (splunkd + 0x1BD553C)
[0x0000564C2B144CA0] _ZN14ArchiveContext9readFullyEP13ISourceWriterRb + 1200 (splunkd + 0x1BD5CA0)
[0x0000564C2ABA1141] _ZN16ArchiveProcessor14processArchiveER5CRC_tS1_ + 5489 (splunkd + 0x1632141)
[0x0000564C2AA2ECC6] _ZN16ArchiveProcessor4mainEv + 614 (splunkd + 0x14BFCC6)
[0x0000564C2B830627] _ZN6Thread8callMainEPv + 135 (splunkd + 0x22C1627)
[0x00007F4C3AFB0EA5] ? (libpthread.so.0 + 0x7EA5)
[0x00007F4C3ACD9B0D] clone + 109 (libc.so.6 + 0xFEB0D)
Linux / security01.dca.int.untd.com / 4.20.5-1.el7.elrepo.x86_64 / #1 SMP Sat Jan 26 10:55:51 EST 2019 / x86_64
/etc/redhat-release: CentOS Linux release 7.9.2009 (Core)
glibc version: 2.17
glibc release: stable
.
Last errno: 0
Threads running: 82
Runtime: 19747.282336s
argv: [splunkd -p 8089 restart splunkd]
Regex JIT enabled
RE2 regex engine enabled
using CLOCK_MONOTONIC
Thread: "archivereader", did_join=0, ready_to_run=Y, main_thread=N, token=139964947887872
MutexByte: MutexByte-waiting={none}
x86 CPUID registers:
0: 0000000D 756E6547 6C65746E 49656E69
1: 000306F2 02400800 FEFA3203 1FCBFBFF
.
80000008: 0000302E 00000000 00000000 00000000
terminating...
Here is the /var/log/messages entry:
Jul 4 06:00:22 hostname kernel: [10196539.758876] traps: splunkd[7701] general protection fault ip:563a1f215b40 sp:7f21cc3f6070 error:0 in splunkd[563a1d041000+408d000]
Can someone please provide a solution for this?
Hello Experts, I need help resolving an issue I am facing while trying to discard events that belong to a specific monitoring path. Here is the issue. Our requirement is such that we have to group servers based on application. Now when we group them based on app, a server for which some path is not required to be monitored also gets ingested, since I am unable to selectively monitor a path based on app for any host. For example, I have apps app1 and app2 with servers app1h1, app1h2 and app2h1, app2h2 respectively. The path to be monitored for app1 with hosts app1h1 and app1h2 is /var/log. The path to be monitored for app2 with hosts app2h1 and app2h2 is /applogs/portal. Now the issue is that since both of these paths are present on all of these hosts, when we mention these paths in the inputs file, hosts app1h1 and app1h2, which were supposed to be monitored for /var/log only, also start sending logs under /applogs/portal, and the same goes for app2h1 and app2h2, which also start sending logs for /var/log rather than just /applogs/portal. We just want specific paths to be monitored on the hosts that require them. I checked filtering out based on a blacklist using regex, but it didn't work under the monitor stanza. I also tried to find a pattern where I could correlate events based on host so that I could write some regex, but that didn't seem to work (for this I am not sure if what I did was correct). Any help or suggestion would be really helpful. Thank you.
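One possible pattern, sketched under the assumption that a deployment server is in use: package each path in its own deployment app and map hosts to apps with serverclass.conf, so each host only ever receives the monitor stanzas it needs. App and host names below are taken from the example above.

# serverclass.conf on the deployment server (sketch)
[serverClass:app1_servers]
whitelist.0 = app1h1
whitelist.1 = app1h2

[serverClass:app1_servers:app:inputs_app1]
restartSplunkd = true

[serverClass:app2_servers]
whitelist.0 = app2h1
whitelist.1 = app2h2

[serverClass:app2_servers:app:inputs_app2]
restartSplunkd = true

# inputs_app1/local/inputs.conf contains only the [monitor:///var/log] stanza
# inputs_app2/local/inputs.conf contains only the [monitor:///applogs/portal] stanza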
Hi, I have a list of domains in a lookup and I need to exclude it from my query:   | tstats summariesonly=true allow_old_summaries=false dc("DNS.query") as count from datamodel=Network_Resolution where nodename=DNS "DNS.message_type"="QUERY" by "DNS.src","DNS.query" index sourcetype | rename "DNS.src" as src "DNS.query" as message index as orig_index sourcetype as orig_sourcetype | eval length=len(message) | stats sum(length) as length by src message orig_index orig_sourcetype | append [ tstats summariesonly=true allow_old_summaries=false dc("DNS.answer") as count from datamodel=Network_Resolution where nodename=DNS "DNS.message_type"="QUERY" by "DNS.src","DNS.answer" index sourcetype | rename "DNS.src" as src "DNS.answer" as message index as orig_index sourcetype as orig_sourcetype | eval message=if(message=="unknown","", message) | eval length=len(message) | stats sum(length) as length by src message orig_index orig_sourcetype] | dedup src | stats sum(length) as length by message src orig_index orig_sourcetype   Now I have to exclude the domains in the lookup from both of my tstats searches. I tried this but I'm not seeing any results; the first part works fine but not the second one:    | tstats summariesonly=true allow_old_summaries=false dc("DNS.query") as count from datamodel=Network_Resolution where nodename=DNS "DNS.message_type"="QUERY" NOT [| inputlookup domainslist | fields domains | rename domains as DNS.query | format] by "DNS.src","DNS.query" index sourcetype | rename "DNS.src" as src "DNS.query" as message index as orig_index sourcetype as orig_sourcetype | eval length=len(message) | stats sum(length) as length by src message orig_index orig_sourcetype | append [ tstats summariesonly=true allow_old_summaries=false dc("DNS.answer") as count from datamodel=Network_Resolution where nodename=DNS "DNS.message_type"="QUERY" NOT [| inputlookup domainslist | fields domains | rename domains as DNS.answer | format] by "DNS.src","DNS.answer" index sourcetype | rename "DNS.src" as src "DNS.answer" as message index as orig_index sourcetype as orig_sourcetype | eval message=if(message=="unknown","", message) | eval length=len(message) | stats sum(length) as length by src message orig_index orig_sourcetype] | dedup src | stats sum(length) as length by message src orig_index orig_sourcetype   Any suggestions would be appreciated. Thanks!
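One alternative to try (a sketch, assuming the lookup field is named domains): leave both tstats searches as they are and filter the combined result afterwards, so only one exclusion subsearch is needed and it matches on the already-renamed message field.

| tstats summariesonly=true allow_old_summaries=false dc("DNS.query") as count from datamodel=Network_Resolution where nodename=DNS "DNS.message_type"="QUERY" by "DNS.src","DNS.query" index sourcetype
| rename "DNS.src" as src "DNS.query" as message index as orig_index sourcetype as orig_sourcetype
| eval length=len(message)
| stats sum(length) as length by src message orig_index orig_sourcetype
| append [ tstats summariesonly=true allow_old_summaries=false dc("DNS.answer") as count from datamodel=Network_Resolution where nodename=DNS "DNS.message_type"="QUERY" by "DNS.src","DNS.answer" index sourcetype | rename "DNS.src" as src "DNS.answer" as message index as orig_index sourcetype as orig_sourcetype | eval message=if(message=="unknown","", message) | eval length=len(message) | stats sum(length) as length by src message orig_index orig_sourcetype]
| search NOT [| inputlookup domainslist | fields domains | rename domains as message ]
| dedup src
| stats sum(length) as length by message src orig_index orig_sourcetype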
Hi Experts, My alert was working fine with this cron schedule for the last few months: 0 7 1 * *   But this month (July), the first day of the month was a Saturday, and the job linked to it doesn't run on weekends, hence my monthly alert didn't trigger. Can someone please help me configure my alert to run at 7 AM on the first Monday of the month?
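One common workaround, sketched here because standard cron cannot express "first Monday of the month" directly: schedule the alert for 7 AM on each of the first seven days of the month, and make the search itself return nothing unless that day is a Monday, so the alert only fires once.

Cron schedule: 0 7 1-7 * *

Appended to the end of the alert's search:
| where strftime(now(), "%A")=="Monday"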
Hi, I started working with Splunk a couple of months ago, and we are currently using Monitoring of Java Virtual Machines with the JMX app to monitor the health of a JVM. Lately, I noticed an issue where the app stopped working and getting data from the JVM after the server CPU spiked to 100%. I tried restarting Splunk on the server and checking the logs; I have the following error: 2023-07-04 08:18:46 ERROR Logger=jmx://XXXXX host=XXXXX, jmxServiceURL=, jmxport=XXX, jvmDescription=XXXX, processID=0,stanza=jmx://XXXXX,systemErrorMessage="Failed to retrieve RMIServer stub: javax.naming.CommunicationException [Root exception is java.rmi.UnmarshalException: error unmarshalling return; nested exception is: java.io.EOFException]" It was working before the CPU issue, and the server had no changes. I double-checked all configurations and tried restarting Splunk and reinstalling the app, but it always returns the same error. Does anyone have a clue?
Hi all, We have zip files (password protected) dropped on an NFS share. We want to collect them automatically into Splunk SOAR to push automated analysis on them. How do you manage to connect the NFS share to SOAR, unzip each archive, and add each new file to a vault/event? Cherry on the cake: delete the zip file from the NFS share! (Sorry if it seems too easy for some of you: I am new to Splunk SOAR...) Thanks
I am trying to extract 2 fields from my logs. Logs:
10.218.136.20 - - [30/Jun/2023:02:36:32 +0000] "GET /api/v2/runs/run-g1mhsXooK6aKV9bS?include=plan%2Ccost_estimate%2Capply%2Ccreated_by HTTP/1.1" 200 5460 "https://terraform.srv.companyname.com.au/app/customer/workspaces/a00ccc-tfe-test02-customer_infra_ping/runs/run-g1mhsXooK6aKV9bS" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36"
Here I want to extract 2 new fields: 1. workspace_name="a00ccc-tfe-test02-customer_infra_ping" 2. workspace_id="g1mhsXooK6aKV9bS" Please help me with the regex. Thanks in advance!
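A rex sketch based on the sample event above, assuming the referrer URL always has the .../workspaces/<name>/runs/run-<id> shape:

| rex field=_raw "workspaces/(?<workspace_name>[^/]+)/runs/run-(?<workspace_id>[^\"?\s/]+)"
| table workspace_name workspace_id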
All fields coming from the scripted input's output are duplicated, like below. Duplicated fields: category, message, priority, timestamp. Script output: {"category": "disk space", "message": "'xxx' host '/nsr' disk path occupied with '92.42%' of disk space. Free up the space.", "priority": "warning", "timestamp": "2023-07-03T08:51:25+02:00"} The timestamp is a different field than _time; it comes in the output as shown above.
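If the duplication turns out to be indexed JSON extractions and the default search-time JSON extraction both firing for the same sourcetype, one hedged fix is to disable search-time KV for that sourcetype on the search head. Sketch, where my_script_json is a placeholder for the actual sourcetype name:

# props.conf on the search head (sketch)
[my_script_json]
KV_MODE = none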
Hi, We've got a dashboard sitting on a problematic SH and would like to clone and move it to another, working SH. Is there a way to redirect users to the newly created cloned dashboard? Many thanks, Toma
Hello Splunk Experts, We are using Splunk ODBC to extract data from Splunk and load it into Qliksense. It was working fine for a couple of months and now it has stopped working with the following error. The field ReqID is alphanumeric, and while loading data it is being converted into a number (not sure why). This is failing the data load process.
Hello Splunkers, I am using the official "Palo Alto Networks Add-on for Splunk" to ingest Palo Alto logs into my Splunk infrastructure. My path is basically Panorama --> HF --> Indexers. I am wondering what will happen if my HF goes down for a certain amount of time. Does the Panorama instance have a temporary output queue that will prevent data loss? What could I do to make this flow of logs more "resilient"? Thanks a lot, GaetanVP
Hello everyone! I'm using the Splunk OpenTelemetry Collector to send logs from k8s to Splunk through an HEC input. It's running as a DaemonSet. The collector is deployed via the Helm chart: https://github.com/signalfx/splunk-otel-collector-chart I would like to exclude logs containing a specific string, for example "Connection reset by peer", but I cannot find the configuration that would do that. It looks like processors can do that: https://opentelemetry.io/docs/collector/configuration/#processors And there is also a default configuration for the collector in the chart, but I cannot understand how to add a filter to it: https://github.com/signalfx/splunk-otel-collector-chart/blob/main/helm-charts/splunk-otel-collector/templates/config/_otel-collector.tpl#L35 Has anyone encountered such an issue, or do you have any advice for this case?
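For what it's worth, a sketch of a filter processor that drops log records whose body matches the string. The processor name, the values-file key used to merge it into the rendered collector config, and the existing processor list in the logs pipeline all depend on the chart version, so treat these as placeholders rather than a verified configuration.

processors:
  filter/drop_conn_reset:
    logs:
      exclude:
        match_type: regexp
        bodies:
          - '.*Connection reset by peer.*'
service:
  pipelines:
    logs:
      # insert the filter into the existing list of logs-pipeline processors;
      # the other entries here are placeholders for whatever the chart already renders
      processors: [memory_limiter, filter/drop_conn_reset, batch]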