All Topics



index="va_tools_oit-salesforce" source="sfdc_event_log://EventLog_va_my_salesforce_com_eventlog_va" sourcetype="sfdc:logfile" URL=""https://case.mibams.ba.com/api/ms*| rex field=_raw "\"\"https://case.mibams.ba.com/api/(?P<API>.*)\"\"\sTIME.*" | dedup API | table API

The search above extracts everything after /api/ in the URL, and it works for two-segment paths such as ms_reports/retrieve. However, I need the API capture to end at the first / or ?, because some URLs carry a long query string, for example:

ms_odata_svc/temp_rcvbl_grids?%24top=201&%24filter=CALL_ID%20eq%20%27VBABOIMcbriJ-40574153%27&%24orderby=RCVBL_TYPE_NM%2CDSCVRY_DT%20desc%2CRCVBL_ID%20desc%2CRCVBL_TRAN_NBR&%24count=false&%24select=AWARD_BENE_TYPE_NM%2CAWARD_TYPE_NM%2CBAL_AMT%2CBENE_NM%2CDSCVRY_DT%2CFACILITY_CD%2CFILE_NBR%2CHOME_LOAN_RFRNC_NBR%2CINT_BAL_AMT%2CLEDGER_ACNT_TYPE_NM%2CLOAN_ID_NBR%2CLOAN_ORGTN_DT%2CORIGNL_AMT%2CRCVBL_ID%2CRCVBL_TRAN_NBR%2CRCVBL_TYPE_NM%2CREPYMT_AMT%2CSSN_NBR%2CTAX_ID_NBR
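One possible adjustment (a sketch only — the host and the doubled-quote escaping are taken from the search in the post; verify against real events) is to replace the greedy `.*` with a character class that excludes `?`, the closing quote, and whitespace, so the capture stops at the first `?` or end of the URL:

```
index="va_tools_oit-salesforce" source="sfdc_event_log://EventLog_va_my_salesforce_com_eventlog_va" sourcetype="sfdc:logfile" URL=""https://case.mibams.ba.com/api/ms*
| rex field=_raw "\"\"https://case\.mibams\.ba\.com/api/(?P<API>[^?\"\s]+)"
| dedup API
| table API
```

For the odata example above this would yield `ms_odata_svc/temp_rcvbl_grids`, while `ms_reports/retrieve` is still captured whole.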
Hi, I am trying to configure a drilldown for a dashboard by selecting the option "Link to custom URL" in the UI. The link is basically a Splunk search; the idea is that when a user clicks on the bar chart, a corresponding search page opens based on the row the user clicked, so the name of the row is dynamic/clickable. To pass the row name to the search query I have tried multiple built-in tokens: $click.value$, $click.name$, $row.value$, $click.value2$, and more. When I click on a row the token is not processed; it stays as click.value2 or whatever was used. Example search query: index=abcd sourcetype="stype" field1=$click.value$. On the bar chart the x-axis is field1. When I hard-code a value into the search query it works great; when using a token I get no results. Any suggestions on what I am doing wrong or what I could try differently? Thanks in advance!!
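One hedged approach, in case the UI form leaves the token literal: tokens like $click.value$ are substituted inside the panel's Simple XML <drilldown> element, so editing the XML source directly often behaves better than the "Link to custom URL" form. A sketch using the index/sourcetype from the example above (the `|u` filter URL-encodes the token value; app path is an assumption):

```
<drilldown>
  <link target="_blank">/app/search/search?q=search index=abcd sourcetype="stype" field1=$click.value|u$</link>
</drilldown>
```

For a bar chart, $click.value$ normally carries the x-axis (field1) value of the clicked bar.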
(Asking on behalf of someone else so doing my best to parlay the question) It looks like Android supports W3C Trace Context propagation via its OkHttp and Volley wrappers, which is great. Unfortunately, we don't see this same support in the iOS SDK. We're relying on the traceparent header for sampling decision propagation, so this lack of support means API requests coming from our iOS client often go untraced in APM. Is this a known issue or something that we've missed as a configuration option? We've explored the iOS SDK repository thoroughly but weren't able to figure it out. To clarify, both Android and iOS SDK successfully read the Server-Timing header from an API call's response and associate it with the client-side spans. The issue is that iOS doesn't include the traceparent header on the request, which is what tells the server to trace the operation in the first place. We would expect that to work since it's core to how OpenTelemetry is supposed to work and does work on the web and Android. Standardized context propagation via W3C Trace Context, which includes the sampling decision, is one of the main advantages of using OpenTelemetry. We would need to update all of our backend services with custom logic to understand a novel propagation system, which is a non-starter.
I have two lookups: one holds the scan results from the current week and the other is a historical lookup of scan results from the weeks prior. Each event is the scan result for a host (fields DNS, IP). I have a field called Host_Auth that can have one of the following values: Windows Successful, Windows Failure, Unix Failed, Unix Successful, Unix Timeout, Windows Not Attempted, Unix Not Attempted, Unknown. I would like to create a search where the first lookup is compared against the second lookup, returning events where the Host_Auth field value is different. I tried a join type=left on Host_Auth, but that didn't quite work.
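A join-free sketch (the lookup file names are assumptions — substitute your own): append both lookups, tag each row with its source, and keep hosts whose Host_Auth differs between the two weeks:

```
| inputlookup current_scan.csv
| eval scan_week="current"
| append
    [| inputlookup historical_scan.csv
     | eval scan_week="historical"]
| stats dc(Host_Auth) as auth_values values(Host_Auth) as Host_Auth values(scan_week) as scan_week by DNS IP
| where auth_values > 1
```

Hosts with the same Host_Auth in both lookups collapse to auth_values=1 and are filtered out, avoiding the row-limit and matching quirks of join.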
Hello, I am new to Splunk. Could someone please advise on how I can track devices on our network that still use SMBv1, using Splunk?
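One hedged starting point: if SMB1 access auditing is enabled on your Windows file servers (Set-SmbServerConfiguration -AuditSmb1Access $true), each SMBv1 connection is written to the Microsoft-Windows-SMBServer/Audit event channel as Event ID 3000, including the client address. Assuming those events are collected into a wineventlog index (the index and client field name are assumptions — check your raw events for the extracted name):

```
index=wineventlog source="WinEventLog:Microsoft-Windows-SMBServer/Audit" EventCode=3000
| stats count latest(_time) as last_seen by host ClientName
| convert ctime(last_seen)
```

This lists, per file server, which clients are still negotiating SMBv1 and when they were last seen.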
Hi, I was wondering how we could download specific notables in CSV or text format from the Incident Review panel in Splunk Enterprise Security.
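A hedged sketch: the `notable` macro in ES exposes notable events as ordinary search results, which can then be exported from the search UI's Export button or written to a lookup CSV. The filter and field list below are assumptions — adjust to the notables you need:

```
`notable`
| search status_label="New"
| table _time rule_name urgency src dest status_label
| outputcsv notable_export
```

Running this from the Search view inside the ES app gives you the same rows Incident Review displays, in an exportable form.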
Hello all, let me apologize if this has been clarified before, though I cannot seem to find any documentation. I have a file that is updated every 60 seconds with the current status of VPN users' connections, including data-transfer bytes, username, and, for the important part, initial connection time. Unfortunately, when Splunk indexes a modification of a line in this file, it uses the user's initial connection time as the _time field. Does anyone know a way, using the inputs.conf file, to set the _time value to the current Unix timestamp? Ideally I would expect something along the lines of:

[monitor:///var/log/openvpn*.log]
index = os
disabled = 0
sourcetype = openvpn
interval = 10
_time = now()

Thank you all for any help you can provide; SPraus
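As far as I know, _time cannot be set in inputs.conf, but props.conf has a setting for exactly this: DATETIME_CONFIG = CURRENT stamps each event with the time it is indexed instead of parsing a timestamp out of the raw text. A sketch, assuming the sourcetype name from the stanza above and that the props.conf lives where parsing happens (indexer or heavy forwarder):

```
# props.conf
[openvpn]
DATETIME_CONFIG = CURRENT
```

This applies at parse time, so events already indexed keep their old _time; only new data picks up the current timestamp.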
I am trying to find a way to produce a column in a table showing the difference between received_time and remediation_timestamp. Currently the Diff_in_Time and TotalDiff_in_Time fields come back empty. Any ideas?

index=someapplication sourcetype=some_log subject="*" "folder_locations{}"="*" from_address="*" remediation_timestamp="*" received_time="*" recipient_address="*" to_addresses="*"
| eval rectime=received_time
| eval remtime=remediation_timestamp
| eval Diff_in_Time=strptime(rectime, "%Y-%m-%d %H:%M:%S.%3N")-strptime(remtime, "%Y-%m-%d %H:%M:%S.%3N")
| eventstats sum(Diff_in_Time) as TotalDiff_in_Time
| table rectime remtime Diff_in_Time TotalDiff_in_Time
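A likely cause worth checking: strptime() returns null when the format string does not match the field's actual value, and arithmetic with null is null, which leaves Diff_in_Time empty. A debugging sketch (format strings copied from the search above — adjust them to match the raw values, e.g. an ISO8601 "T" separator or a timezone suffix):

```
index=someapplication sourcetype=some_log remediation_timestamp="*" received_time="*"
| eval rec_epoch=strptime(received_time, "%Y-%m-%d %H:%M:%S.%3N")
| eval rem_epoch=strptime(remediation_timestamp, "%Y-%m-%d %H:%M:%S.%3N")
| table received_time rec_epoch remediation_timestamp rem_epoch
```

Whichever epoch column shows up empty points at the field whose format string needs fixing; once both parse, the subtraction will populate.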
Hello, my architecture is the following: a cluster of 3 indexers, 1 master node, and 1 search head. I removed the search head instance from the cluster by mistake, re-added it, and now I am receiving this error:

Failed to start KV Store process. See mongod.log and splunkd.log for details. KV Store changed status to failed. KVStore process terminated. KV Store process terminated abnormally (exit code 6, status PID 3339006 killed by signal 6: Aborted). See mongod.log and splunkd.log for details.

mongod.log:

2023-04-27T14:41:33.317Z I CONTROL [initandlisten] MongoDB starting : pid=3339006 port=8191 dbpath=/opt/splunk/var/lib/splunk/kvstore/mongo 64-bit host=sv-na-splhead
2023-04-27T14:41:33.317Z I CONTROL [initandlisten] db version v3.6.17-linux-splunk-v4
2023-04-27T14:41:33.317Z I CONTROL [initandlisten] git version: 226949cc252af265483afbf859b446590b09b098
2023-04-27T14:41:33.317Z I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2zf-fips 21 Jun 2022
2023-04-27T14:41:33.317Z I CONTROL [initandlisten] allocator: tcmalloc
2023-04-27T14:41:33.317Z I CONTROL [initandlisten] modules: none
2023-04-27T14:41:33.317Z I CONTROL [initandlisten] build environment:
2023-04-27T14:41:33.317Z I CONTROL [initandlisten] distarch: x86_64
2023-04-27T14:41:33.317Z I CONTROL [initandlisten] target_arch: x86_64
2023-04-27T14:41:33.317Z I CONTROL [initandlisten] options: { net: { bindIp: "0.0.0.0", port: 8191, ssl: { PEMKeyFile: "/opt/splunk/etc/auth/server.pem", PEMKeyPassword: "<password>", allowInvalidHostnames: true, disabledProtocols: "noTLS1_0,noTLS1_1", mode: "requireSSL", sslCipherConfig: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RS..." 
}, unixDomainSocket: { enabled: false } }, replication: { oplogSizeMB: 200, replSet: "4F365B7A-622E-4121-82B0-CF2933199661" }, security: { javascriptEnabled: false, keyFile: "/opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key" }, setParameter: { enableLocalhostAuthBypass: "0", oplogFetcherSteadyStateMaxFetcherRestarts: "0" }, storage: { dbPath: "/opt/splunk/var/lib/splunk/kvstore/mongo", engine: "mmapv1", mmapv1: { smallFiles: true } }, systemLog: { timeStampFormat: "iso8601-utc" } } 2023-04-27T14:41:33.320Z I STORAGE [initandlisten] exception in initAndListen: Location28662: Cannot start server. Detected data files in /opt/splunk/var/lib/splunk/kvstore/mongo created by the 'wiredTiger' storage engine, but the specified storage engine was 'mmapv1'., terminating 2023-04-27T14:41:33.320Z F - [initandlisten] Invariant failure globalStorageEngine src/mongo/db/service_context_d.cpp 272 2023-04-27T14:41:33.320Z F - [initandlisten] ***aborting after invariant() failure 2023-04-27T14:41:33.331Z F - [initandlisten] Got signal: 6 (Aborted). 
0x55fed413bde1 0x55fed413aff9 0x55fed413b4dd 0x7fb5cae3d140 0x7fb5cac8dce1 0x7fb5cac77537 0x55fed27f5330 0x55fed2aa2d18 0x55fed3fe07f1 0x55fed3fdc8d7 0x55fed286c93c 0x55fed4137195 0x55fed27f6513 0x55fed28762f0 0x55fed287431a 0x55fed27f7159 0x7fb5cac78d0a 0x55fed285b665 ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"55FED1E7B000","o":"22C0DE1","s":"_ZN5mongo15printStackTraceERSo"},{"b":"55FED1E7B000","o":"22BFFF9"},{"b":"55FED1E7B000","o":"22C04DD"},{"b":"7FB5CAE2A000","o":"13140"},{"b":"7FB5CAC55000","o":"38CE1","s":"gsignal"},{"b":"7FB5CAC55000","o":"22537","s":"abort"},{"b":"55FED1E7B000","o":"97A330","s":"_ZN5mongo22invariantFailedWithMsgEPKcS1_S1_j"},{"b":"55FED1E7B000","o":"C27D18","s":"_ZN5mongo20ServiceContextMongoD9_newOpCtxEPNS_6ClientEj"},{"b":"55FED1E7B000","o":"21657F1","s":"_ZN5mongo14ServiceContext20makeOperationContextEPNS_6ClientE"},{"b":"55FED1E7B000","o":"21618D7","s":"_ZN5mongo6Client20makeOperationContextEv"},{"b":"55FED1E7B000","o":"9F193C"},{"b":"55FED1E7B000","o":"22BC195"},{"b":"55FED1E7B000","o":"97B513","s":"_ZN5mongo8shutdownENS_8ExitCodeERKNS_16ShutdownTaskArgsE"},{"b":"55FED1E7B000","o":"9FB2F0","s":"_ZN5mongo10StatusWithISt6vectorISt4pairINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES8_ESaIS9_EEEC1ERKSC_"},{"b":"55FED1E7B000","o":"9F931A","s":"_ZN5mongo11mongoDbMainEiPPcS1_"},{"b":"55FED1E7B000","o":"97C159","s":"main"},{"b":"7FB5CAC55000","o":"23D0A","s":"__libc_start_main"},{"b":"55FED1E7B000","o":"9E0665"}],"processInfo":{ "mongodbVersion" : "3.6.17-linux-splunk-v4", "gitVersion" : "226949cc252af265483afbf859b446590b09b098", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "5.10.0-18-amd64", "version" : "#1 SMP Debian 5.10.140-1 (2022-09-02)", "machine" : "x86_64" }, "somap" : [ { "b" : "55FED1E7B000", "elfType" : 3 }, { "b" : "7FFF14ABA000", "path" : "linux-vdso.so.1", "elfType" : 3 }, { "b" : "7FB5CB31C000", "path" : "/opt/splunk/lib/libdlwrapper.so", "elfType" : 3 }, { "b" : "7FB5CB2FE000", "path" : 
"/lib/x86_64-linux-gnu/libresolv.so.2", "elfType" : 3 }, { "b" : "7FB5CB019000", "path" : "/opt/splunk/lib/libcrypto.so.1.0.0", "elfType" : 3 }, { "b" : "7FB5CAFA2000", "path" : "/opt/splunk/lib/libssl.so.1.0.0", "elfType" : 3 }, { "b" : "7FB5CAF9C000", "path" : "/lib/x86_64-linux-gnu/libdl.so.2", "elfType" : 3 }, { "b" : "7FB5CAF92000", "path" : "/lib/x86_64-linux-gnu/librt.so.1", "elfType" : 3 }, { "b" : "7FB5CAE4C000", "path" : "/lib/x86_64-linux-gnu/libm.so.6", "elfType" : 3 }, { "b" : "7FB5CAE2A000", "path" : "/lib/x86_64-linux-gnu/libpthread.so.0", "elfType" : 3 }, { "b" : "7FB5CAC55000", "path" : "/lib/x86_64-linux-gnu/libc.so.6", "elfType" : 3 }, { "b" : "7FB5CB321000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3 }, { "b" : "7FB5CAC38000", "path" : "/opt/splunk/lib/libz.so.1", "elfType" : 3 } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x55fed413bde1] mongod(+0x22BFFF9) [0x55fed413aff9] mongod(+0x22C04DD) [0x55fed413b4dd] libpthread.so.0(+0x13140) [0x7fb5cae3d140] libc.so.6(gsignal+0x141) [0x7fb5cac8dce1] libc.so.6(abort+0x123) [0x7fb5cac77537] mongod(_ZN5mongo22invariantFailedWithMsgEPKcS1_S1_j+0x0) [0x55fed27f5330] mongod(_ZN5mongo20ServiceContextMongoD9_newOpCtxEPNS_6ClientEj+0x158) [0x55fed2aa2d18] mongod(_ZN5mongo14ServiceContext20makeOperationContextEPNS_6ClientE+0x41) [0x55fed3fe07f1] mongod(_ZN5mongo6Client20makeOperationContextEv+0x27) [0x55fed3fdc8d7] mongod(+0x9F193C) [0x55fed286c93c] mongod(+0x22BC195) [0x55fed4137195] mongod(_ZN5mongo8shutdownENS_8ExitCodeERKNS_16ShutdownTaskArgsE+0x36E) [0x55fed27f6513] mongod(_ZN5mongo10StatusWithISt6vectorISt4pairINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES8_ESaIS9_EEEC1ERKSC_+0x0) [0x55fed28762f0] mongod(_ZN5mongo11mongoDbMainEiPPcS1_+0x87A) [0x55fed287431a] mongod(main+0x9) [0x55fed27f7159] libc.so.6(__libc_start_main+0xEA) [0x7fb5cac78d0a] mongod(+0x9E0665) [0x55fed285b665] ----- END BACKTRACE ----   Can I have any advise on how to resolve it ?   Thank you!      
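The decisive line in the mongod.log above is "Detected data files ... created by the 'wiredTiger' storage engine, but the specified storage engine was 'mmapv1'": the KV store data on disk was created with wiredTiger, but the process is being launched configured for mmapv1, probably as a side effect of re-adding the search head. One hedged possibility (a sketch only — verify against your environment and Splunk's KV store migration guidance, and back up $SPLUNK_DB/kvstore first) is to make the configured engine match the existing data files in server.conf on the search head, then restart Splunk:

```
# server.conf on the search head
[kvstore]
storageEngine = wiredTiger
```

If the KV store content on this search head is disposable, an alternative is a clean resync/reset of the KV store rather than changing the engine.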
Dear Team, can you please provide the steps to receive AppDynamics alerts through SMS in the India region? Thanks & Regards, Nihar
Hi all, I have a web-browser-based application that is supported only in IE and Edge, and I need synthetic monitoring configured for it in AppDynamics. Is there a recorder for IE, like Selenium IDE or the Katalon Recorder? I ask because I couldn't locate an XPath/CSS selector in IE. It also looks like Edge is not supported in AppDynamics? We are using the PSA; will it be supported? Thanks.
Hi there, I have several fields, with multiple values for the same src_name and email, stored in a lookup. For each specific src_name, I need the row with the latest check_date and its associated field values.
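A sketch of one way to keep only the newest row per src_name (the lookup file name and the check_date format are assumptions — adjust both): parse the date, sort newest-first, and let dedup keep the first row it sees for each src_name:

```
| inputlookup src_lookup.csv
| eval check_epoch=strptime(check_date, "%Y-%m-%d")
| sort 0 - check_epoch
| dedup src_name
| fields - check_epoch
```

The `sort 0` avoids the default 10,000-row sort limit, so the dedup is correct even for large lookups.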
I have a field called Error. If there is an error we get an error message; if there is no error the field is empty. E.g. a value for Error is "E00000 duplicate key error". I tried the following to add a Status field — Failure if an error is thrown, else Success:

| eval Status=case(len(Error)=='', "Success", len(Error)>0, "Failure")

It doesn't print Success in Status where there is no error (Error is empty). I also tried:

| eval Status=case(isnull(Error), "Success", isnotnull(Error), "Failure")

This prints both Success and Failure for the failure state; it seems isnull is satisfied for both conditions. Please advise.
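A likely explanation: an extracted-but-empty field is an empty string rather than a true null, and `len(Error)==''` compares a number against a string, so that branch never matches. A sketch that treats null, empty, and whitespace-only values all as success:

```
| eval Status=if(isnull(Error) OR trim(Error)=="", "Success", "Failure")
```

Checking both isnull() and trim(...)=="" covers events where the field is missing entirely as well as events where it exists but is blank.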
I want to make an evenly spaced x-axis in a dataset with gaps in it, and then use chart to make a trellis view split by the variable "testsubject". I use |makecontinuous to pad the data set with empty x-values so the spacing between the existing data points comes out right in a chart. The search looks like this:

| tstats avg(ReadCVM1) as ReadCVM1 avg(ReadCVM2) as ReadCVM2 avg(ReadCVM3) as ReadCVM3 avg(ReadCVM4) as ReadCVM4 avg(ReadCVM5) as ReadCVM5 avg(ReadCVM6) as ReadCVM6 avg(ReadCVM7) as ReadCVM7 avg(ReadCVM8) as ReadCVM8 avg(ReadCVM9) as ReadCVM9 avg(ReadCVM10) as ReadCVM10 avg(ReadStackPot) as ReadStackPot avg(ReadCoolTempOut) as ReadCoolTempOut latest(RunPolCurve) as RunPolCurve where index=test_station_log_data AND (testsubject IN (P1211,P1213)) by ReadCurrent testsubject
| where RunPolCurve=1
| eval RoundCurrent = round(ReadCurrent)
| sort testsubject ReadCurrent
| makecontinuous RoundCurrent span=1
| filldown testsubject
| chart avg(ReadCVM1) as ReadCVM1 avg(ReadCVM2) as ReadCVM2 avg(ReadCVM3) as ReadCVM3 avg(ReadCVM4) as ReadCVM4 avg(ReadCVM5) as ReadCVM5 avg(ReadCVM6) as ReadCVM6 avg(ReadCVM7) as ReadCVM7 avg(ReadCVM8) as ReadCVM8 avg(ReadCVM9) as ReadCVM9 avg(ReadCVM10) as ReadCVM10 by RoundCurrent testsubject

If I remove the chart command, the gaps in RoundCurrent have been filled the way I want (screenshot omitted). After I run the |chart command, the padded regions are removed again. Can I prevent this from happening? I found that I can get them back by running |makecontinuous after the |chart command, but then I lose the ability to make a trellis view split by "testsubject", which I need to present this properly in a dashboard. Any help would be greatly appreciated.
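One hedged workaround (a sketch against the search above, untested): instead of padding after aggregation, manufacture an empty placeholder row for every (testsubject, RoundCurrent) pair before chart runs, so chart has no missing x-values to drop and the trellis split survives:

```
| appendpipe
    [| stats min(RoundCurrent) as min_c max(RoundCurrent) as max_c by testsubject
     | eval RoundCurrent=mvrange(min_c, max_c+1)
     | mvexpand RoundCurrent
     | fields testsubject RoundCurrent]
| chart avg(ReadCVM1) as ReadCVM1 avg(ReadCVM2) as ReadCVM2 by RoundCurrent testsubject
```

The placeholder rows carry no ReadCVM values, so the averages are unchanged; they only force every RoundCurrent step to exist for every testsubject.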
Hi all, I created a correlation search in Splunk ES and added a Notable Event in the Adaptive Response Actions. I'd like to include in the notable some information from the correlation search results, such as orig_host, ip, and some other custom fields. I went to Incident Review Settings to add my custom fields to the Event Attributes. I adjusted my correlation search query so that the returned fields are named according to the Event Attributes that already exist, plus the custom ones I just created. In the correlation search, in the "Notable" sub-menu, I added the fields I'd like to enrich my notable with to Identity Extraction and Asset Extraction. I then added a few variables to the title of the created notable, something like: "the user $custom_field_1$ just connected to the account of the user $custom_field_2$, from the computer $orig_host$ that belongs to $orig_host_owner$". The $custom_field_1$ and $custom_field_2$ variables work and return the right values; $orig_host$ and $orig_host_owner$ don't, and return the literal strings $orig_host$ and $orig_host_owner$. I'm a bit confused. Has anybody seen this before? Thanks for your kind help!
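One hedged thing to check: token substitution in the notable title only works for fields that are present in the final result rows of the correlation search itself — the Identity/Asset Extraction settings classify fields but don't add new ones to the results. A sketch that materializes the fields in the results before the notable action fires (the source field name and lookup are assumptions taken from a typical ES setup):

```
... existing correlation search ...
| rename src AS orig_host
| lookup asset_lookup_by_str asset AS orig_host OUTPUT owner AS orig_host_owner
```

If orig_host and orig_host_owner appear as columns when you run the search manually, the $...$ tokens in the title should resolve.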
I have spent a LOT of time searching for a way to do this. I have saved searches within Splunk Enterprise 9.x (the cloud instance) and want to grab their CSVs into a Windows directory to then import into a 3rd-party toolset. There are a LOT of Google results and massively outdated Splunk community posts, which just clouds the issue.

A colleague has used a variant of the script below; they used a different certificate bypass because it was written for PowerShell v5, whereas the customer I am working with has PowerShell v7, where the -SkipCertificateCheck switch is supported.

When I run the script I get a timeout: "A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond."

I would appreciate it if anyone has an existing, known-working PowerShell script to obtain the results of the saved searches and output them to a nominated Windows directory. Many thanks!

$requestUri = "https://{customer}.splunkcloud.com:8089/services/search/v2/jobs/export"
$accessToken = "{token removed}"
$outFile = "C:\DataPlatform\SplunkExports\GS_NETWORK_ADAPTER_CONFIGUR.csv"
$headers = @{ Authorization = "Bearer $accessToken" }
$params = @{
    search = "savedsearch mc_LCM_NETWORK_ADAPTER"
    output_mode = "csv"
}
Invoke-WebRequest -SkipCertificateCheck -Headers $headers -Uri $requestUri -Body $params -ContentType "application/x-www-form-urlencoded" -OutFile $outFile
Hello, I'm trying to parse URLs in Java logs (*.trace). It works for the complete URL with the following request:

index=os_win_wks sourcetype="os_win_wks:java:trace" | rex field=_raw (?<URL>https?:\/\/\S+)

but I want to stop at the first "/" after the host. With this one I get an error message:

index=os_win_wks sourcetype="os_win_wks:java:trace" | rex (?<url>https?:\/\\[^:\/]+)

Error => Error in 'SearchParser': Missing a search command before '^'. Error at position '75' of search query 'search index=* sourcetype="os_win_wks:java:trace" ...{snipped} {errorcontext = tps?:\/\\[^:\/]+)}'.

Could you help me, please?
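A hedged sketch of a likely fix: the parser error comes from the regex being unquoted (the `^` inside `[^...]` gets interpreted by the search parser), and `:\/\\` has a stray backslash where the second `/` should be. Quoting the pattern sidesteps both, and inside a quoted rex the slashes need no escaping:

```
index=os_win_wks sourcetype="os_win_wks:java:trace"
| rex field=_raw "(?<url>https?://[^:/\s]+)"
```

This captures the scheme plus host (e.g. https://example.com from https://example.com/some/path) and stops at the first /, :, or whitespace after the host.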
I'm able to change the font of a particular row, but I want to change the font colour of a particular cell depending on a condition. I found a way described in https://blog.avotrix.com/change-font-color-of-table-cell-value-in-splunk/ but it involves using JS. Is there any way to do this without using JavaScript?
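A hedged partial answer: without JavaScript, Simple XML can colour a cell's background (though not the font itself) conditionally via <format type="color"> on the table. A sketch — the field name and values are assumptions:

```
<table>
  <search>...</search>
  <format type="color" field="status">
    <colorPalette type="map">{"FAILURE":#DC4E41,"SUCCESS":#53A051}</colorPalette>
  </format>
</table>
```

If the requirement is strictly the font colour rather than the cell background, I'm not aware of a Simple XML option for that — that's the part the linked JS approach covers. Dashboard Studio may be worth a look as an alternative.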
Hi, I would appreciate it if someone could assist me with a problem. The events appearing in the indexer on Splunk Cloud are exceeding my license limit. Is there a way to redirect unwanted events to a null queue?
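Yes — events can be discarded before they count against the license with a props/transforms nullQueue rule. On Splunk Cloud these settings cannot go on the cloud indexers directly, so they belong on a heavy forwarder you control (or can be requested via Splunk support / the ACS-managed apps). A sketch with assumed names — replace the sourcetype and REGEX with your own:

```
# props.conf
[noisy_sourcetype]
TRANSFORMS-drop_unwanted = drop_unwanted

# transforms.conf
[drop_unwanted]
REGEX = pattern_that_matches_unwanted_events
DEST_KEY = queue
FORMAT = nullQueue
```

Events from that sourcetype whose raw text matches REGEX are routed to the null queue and never indexed; everything else passes through unchanged.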