All Topics

Hi, I want to enrich my Helm charts with readiness and liveness probes, but I could not find any publicly available information on how to check the health/liveness of the Splunk containers. Does anyone happen to know how to do that? Thanks, Florian.
Hi Experts, I am new to SNMP and the snmp_ta app. I am having difficulty converting a MIB to a Python module:

build-pysnmp-mib: command not found
libsmi2pysnmp: command not found

My OS details:
NAME="Amazon Linux AMI"
VERSION="2018.03"
ID="amzn"
ID_LIKE="rhel fedora"
VERSION_ID="2018.03"
PRETTY_NAME="Amazon Linux AMI 2018.03"
ANSI_COLOR="0;33"
CPE_NAME="cpe:/o:amazon:linux:2018.03:ga"
HOME_URL="http://aws.amazon.com/amazon-linux-ami/"

On my system I have pysnmp.__version__ 4.4.12.
Splunk is getting duplicate data from Azure when using the Cost and Consumption REST API. How can we fix this?
Hello, I have an architecture like this:

Splunk Universal Forwarder 1-N => Splunk Indexer 1 => Splunk Search Head 0
Splunk Universal Forwarder 1-N => Splunk Indexer 2 => Splunk Search Head 0
Splunk Universal Forwarder 1-N => Splunk Indexer N => Splunk Search Head 0

I would like to know if I can forward data from the Splunk Search Head to third-party software. I know there are apps like CEP, but what I would like is to forward data to the Splunk Indexers for indexing, forward data from the Splunk Indexers to the Splunk Search Head, and finally forward data from the Splunk Search Head to the third-party software. I don't want to forward from the Splunk Forwarders directly to the third-party software; I would like a single point (the Splunk Search Head) to do the forwarding. Maybe this choice is a mistake. What is the best practice, with good security, to avoid exposing all the Splunk Forwarders or all the Splunk Indexers to the third-party software? I'm sorry for my bad English. Thank you very much for your help.
I've created a search-driven lookup on Splunk ES, then I tried to create an automatic lookup with the new lookup file. I get error messages as follows:

[indexer1] Could not load lookup=LOOKUP-lookup_definitions_name
[indexer2] Could not load lookup=LOOKUP-lookup_definitions_name
[indexer3] Could not load lookup=LOOKUP-lookup_definitions_name
...

How can I resolve this issue?
Hello Splunkers, I need your help to properly configure VirusTotal TA 2.0.0 (https://splunkbase.splunk.com/app/4283/). It works fine for the admin role, but not for the power role, even though permissions are configured in the .meta file under ./metadata/:

[commands/virustotal]
export = system
access = read : [ admin, power ], write : [ admin ]

I played with those settings, but it didn't help:

[commands/virustotal]
export = system
access = read : [ admin, power ], write : [ admin, power ]

I am still getting errors for the power user:

07-15-2020 17:15:40.008 +1000 ERROR ChunkedExternProcessor - stderr: AuthenticationError at "/opt/splunk/etc/apps/TA-VirusTotal/bin/splunklib/binding.py", line 303 : Request failed: Session is not logged in.
07-15-2020 17:15:40.008 +1000 ERROR ChunkedExternProcessor - stderr: File "/opt/splunk/etc/apps/TA-VirusTotal/bin/splunklib/searchcommands/search_command.py", line 740, in _process_protocol_v2
07-15-2020 17:15:40.008 +1000 ERROR ChunkedExternProcessor - stderr: File "/opt/splunk/etc/apps/TA-VirusTotal/bin/virustotal.py", line 539, in prepare
07-15-2020 17:15:40.008 +1000 ERROR ChunkedExternProcessor - stderr: File "/opt/splunk/etc/apps/TA-VirusTotal/bin/splunklib/client.py", line 1262, in __iter__
07-15-2020 17:15:40.008 +1000 ERROR ChunkedExternProcessor - stderr: File "/opt/splunk/etc/apps/TA-VirusTotal/bin/splunklib/client.py", line 1425, in iter
07-15-2020 17:15:40.008 +1000 ERROR ChunkedExternProcessor - stderr: File "/opt/splunk/etc/apps/TA-VirusTotal/bin/splunklib/client.py", line 1655, in get
07-15-2020 17:15:40.008 +1000 ERROR ChunkedExternProcessor - stderr: File "/opt/splunk/etc/apps/TA-VirusTotal/bin/splunklib/client.py", line 753, in get
07-15-2020 17:15:40.008 +1000 ERROR ChunkedExternProcessor - stderr: File "/opt/splunk/etc/apps/TA-VirusTotal/bin/splunklib/binding.py", line 303, in wrapper
Hi, we configured transforms.conf, props.conf, and fields.conf while pushing events into the main index. With that in place, indexed fields are created and the tstats command works fine. Now we are trying to move the matching events from the main index into a summary index using the collect command. The indexed fields from the main index come through into the summary index, but they do not act as indexed fields there, and when we try to use tstats on the summary index it does not work on those fields. Can you please help us resolve this problem? What we need is for the collected fields to also act as indexed fields in the summary index. Please correct us if we are missing something.

Thanks & Regards
Nagendra D
The scenario is that I want to wrap around an existing app (ServiceNow) to make it easier for analysts to use manually, abstracting away the sys_id values of teams and request types. I see the docs say you can't call act() from a custom function. I'm guessing, based on phenv python3; import phantom, that you can't call act() from there either.
Hi guys, I am trying to find changes in the Office 365 IP addresses and URLs using SPL by comparing today's results to yesterday's. There is probably a more efficient way of doing this too!

Search:

index=dp source="rest://Query" earliest=-1d@d latest=now
| stats values(tcpPorts) as tcpPorts_t values(udpPorts) as udpPorts_t values(ips{}) as ips_t by urls{}
| appendcols
    [search index=dp source="rest://Query" earliest=-2d@d latest=-1d@d
    | stats values(tcpPorts) as tcpPorts_y values(udpPorts) as udpPorts_y values(ips{}) as ips_y by urls{}]
| eval change=if("tcpPorts_t"="tcpPorts_y" OR "udpPorts_t"="udpPorts_y" or "ips_t"="ips_y", "Change", "No Change")
| join type=left change
    [search index=dp source="rest://Query" earliest=-1d@d latest=now
    | stats values(tcpPorts) as tcpPorts_t values(udpPorts) as udpPorts_t values(urls{}) as urls{}_t by ips{}
    | appendcols
        [search index=dp source="rest://Query" earliest=-2d@d latest=-1d@d
        | stats values(tcpPorts) as tcpPorts_y values(udpPorts) as udpPorts_y values(urls{}) as urls{}_y by ips{}]
    | eval change=if("tcpPorts_t"="tcpPorts_y" OR "udpPorts_t"="udpPorts_y" or "urls{}_t"="urls{}_y", "Change", "No Change")]
| table change tcpPorts_t tcpPorts_y udpPorts_t udpPorts_y ips_t ips_y urls{}_t urls{}_y
| sort - change

IP addresses are appearing OK, but I am getting just one value for the URL. Not too sure if makemv will help here?
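One possible restructuring, offered only as a sketch: appendcols aligns rows by position rather than by URL, so a single search over both days that tags each event as "today" or "yesterday" and then compares per URL may be more reliable. The field names (tcpPorts, udpPorts, ips{}, urls{}) are taken from the search above; everything else is illustrative and untested against this data.

index=dp source="rest://Query" earliest=-2d@d latest=now
| eval day=if(_time >= relative_time(now(), "-1d@d"), "today", "yesterday")
| stats values(ips{}) as ips values(tcpPorts) as tcpPorts values(udpPorts) as udpPorts by urls{} day
| eval ips=mvjoin(mvsort(ips), " "), tcpPorts=mvjoin(mvsort(tcpPorts), " ")
| stats values(eval(if(day="today", ips, null()))) as ips_t values(eval(if(day="yesterday", ips, null()))) as ips_y values(eval(if(day="today", tcpPorts, null()))) as tcpPorts_t values(eval(if(day="yesterday", tcpPorts, null()))) as tcpPorts_y by urls{}
| eval change=if(ips_t!=ips_y OR tcpPorts_t!=tcpPorts_y, "Change", "No Change")

The same pattern would extend to udpPorts. Note that the comparison uses unquoted field names, so it compares the field values rather than the literal strings "ips_t" and "ips_y".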
The following db query is not working:

| dbquery wmsewprd select REC_TYPE, CODE_TYPE, CODE_DESC, SHORT_DESC, USER_ID, To_Char(mod_date_time,'dd/mm/yyyy hh:mi:ss AM') AS MOD_DATE_TIME from SYS_CODE_TYPE where rec_type = 'C' and code_type = 'AWO' and (sysdate - mod_date_time)*24*60 < 60"

I am getting the following error:

command="dbquery", A database error occurred: ORA-00942: table or view does not exist

But the table SYS_CODE_TYPE does exist. When we run the following query, it returns events:

dbquery wmsewprd "select REC_TYPE, CODE_TYPE, CODE_DESC, SHORT_DESC, USER_ID from SYS_CODE_TYPE"

What could be the problem?
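One detail worth checking (a guess, not a confirmed diagnosis): the working query wraps the whole SQL statement in double quotes, while the failing one starts unquoted and ends with a stray closing quote, so dbquery may not be receiving the SQL you expect. A quoted version of the same statement would look like this:

| dbquery wmsewprd "select REC_TYPE, CODE_TYPE, CODE_DESC, SHORT_DESC, USER_ID, To_Char(mod_date_time,'dd/mm/yyyy hh:mi:ss AM') AS MOD_DATE_TIME from SYS_CODE_TYPE where rec_type = 'C' and code_type = 'AWO' and (sysdate - mod_date_time)*24*60 < 60"

If the quoted form still fails with ORA-00942, it may be worth qualifying the table with its schema owner (for example OWNER.SYS_CODE_TYPE, where OWNER is whatever schema actually holds the table), since that Oracle error also appears when the connection's default schema cannot see the table.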
Hi, in my dashboard I use a first dropdown list with static values. These values refer to the SITE field I use in my different reports with the lookup below:

| lookup TOTO.csv HOSTNAME as host output SITE

Now I would like to update a second dropdown list based on the first one. That is, when I select a SITE value in the first dropdown, I would like the second dropdown list to display all the hosts corresponding to the SITE selected in the first dropdown. Could you help me please?
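A minimal sketch of a populating search for the second dropdown, assuming the first dropdown sets a token named site_tok and that TOTO.csv contains HOSTNAME and SITE columns (both assumptions based on the lookup shown above):

| inputlookup TOTO.csv
| search SITE="$site_tok$"
| stats count by HOSTNAME
| fields HOSTNAME

In Simple XML this search would go inside the second dropdown input, with HOSTNAME as both fieldForLabel and fieldForValue; the actual token name depends on how the first input is defined in your dashboard.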
I want to monitor the .conf files inside the deployment apps on the deployment server: in Splunk Web, from Add Data, I select Monitor and monitor *.conf files. If I ingest data this way, is it possible to index the monitored logs on the deployment server itself, without sending them to the indexer server?
I have two lookups, user.csv and roles.csv. I'm trying to collate both tables and make a table which shows each index name and the usernames that use that index. It's difficult for me to get accurate data because "splunk_user" is common to all the usernames, and it shows up when I run the query below:

|inputlookup user.csv | lookup roles.csv roles outputnew indexes | table indexes, username

Can someone please help in getting this query right? Or is there an alternate solution to find all the indexes and the users using those indexes?

user.csv ("splunk_user" is common to all the usernames):
username,roles
abc,"splunk_user index2_user"
def,"splunk_user"
xyz,"splunk_user index1_power"
klm,"splunk_user"
pqr,"splunk_user index2_power"

roles.csv:
roles,indexes
"splunk_user","index_all index_3 index_4 index_5"
"index1_user","index_1"
"index1_power","index_1"
"index2_user","index_2"
"index2_power","index_2"
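A sketch of one way to do this, assuming the roles and indexes columns are space-delimited lists exactly as shown in the CSVs above: split them into multivalue fields and expand them, so each role is matched against roles.csv on its own before grouping by index.

| inputlookup user.csv
| makemv delim=" " roles
| mvexpand roles
| lookup roles.csv roles OUTPUT indexes
| makemv delim=" " indexes
| mvexpand indexes
| stats values(username) as username by indexes

Note that the indexes granted through splunk_user will still list every user, since every user holds that role; if that is unwanted, filtering out roles="splunk_user" before the lookup would exclude them.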
In the Splunk Lookup Editor (https://splunkbase.splunk.com/app/1724/) version 3.4.2, when I hit save on a CSV file on Splunk 8.0.4.1, I see a JavaScript error:

TableEditorView.js:273 Uncaught TypeError: Cannot read property '0' of null
at child.prepareForSaving (TableEditorView.js:273)
at child.getData (TableEditorView.js:284)
at child.doSaveLookup (LookupEditView.js:985)
at HTMLDivElement.dispatch (common.js:1063)
at HTMLDivElement.elemData.handle (common.js:1063)

I can replicate this if I create a lookup and leave some fields blank; once the error appears, the "saving" indicator is stuck there forever and the save never completes. I can also replicate it on various random lookup files while editing: sometimes a reload and a save works, and sometimes it does not. Attaching one example screenshot. (CTRL+E is a nice trick, by the way!) The only temporary workaround appears to be to load the lookup and then press CTRL+F5, and sometimes this works; however, if you load the lookup or create a new lookup, the issue occurs. I can consistently re-create the issue for new lookups. It is usually when there are blank entries or no rows filled out, but we also have the exact same issue when all rows do appear to be filled out. Finally, copy/paste from Excel appears to be an issue too, but the JavaScript error looks fairly consistent.
Hi, I am trying to pass a value from one panel to a dropdown using a drilldown. The issue I am getting is when using the round function with eval:

<drilldown>
  <set token="form.tok_status">$click.name2$</set>
  <eval token="form.tok_time">round($click.value$,0)</eval>
</drilldown>

When passing the token directly I am able to get the value, but when using round it is not working.
Hi All, I need your input and suggestions: can we use this app https://splunkbase.splunk.com/app/3491/#/details to build the network hierarchy automatically, i.e. network auto-discovery? If not, does Splunk have a network auto-discovery feature or mechanism?
Hello all, I would like to exclude the following Windows event log on the universal forwarder:

07/15/2020 08:38:55 AM
LogName=Microsoft-Windows-PowerShell/Operational
SourceName=Microsoft-Windows-PowerShell
EventCode=4103
EventType=4
Type=Information
ComputerName=HA-AGM-DB-01.SSI.LOCAL
User=NOT_TRANSLATED
Sid=S-1-5-21-2993187273-2588912068-3154952105-14529
SidType=0
TaskCategory=Executing Pipeline
OpCode=To be used when operation is just executing a method
RecordNumber=90748
Keywords=None
Message=CommandInvocation(Out-Host): "Out-Host"
CommandInvocation(Out-Default): "Out-Default"
ParameterBinding(Out-Default): name="Transcript"; value="True"
Context:
Severity = Informational
Host Name = ApmPSHost
Host Version = 1.0
Host ID = 85e424db-4fce-46cb-90d4-bace72bb3e2a
Host Application = SWJobEngineWorker2.exe 3e167b33-0a26-4f7e-9964-e38b1e939cc6 6612 AgentPlugin SolarWinds.APM.Probes
Engine Version = 5.1.14393.3383
Runspace ID = 3c6aab92-96b5-464d-8659-0e81de6d4ec9
Pipeline ID = 1
Command Name =
Command Type = Script
Script Name =
Command Path =
Sequence Number = 193
User = xxxxx
Connected User =
Shell ID = Microsoft.PowerShell
User Data:

I tried with this blacklist:

blacklist = EventCode="4103" Message="Host\sApplication\s=*SolarWinds.APM.Probes"
Hi, I am using the REST API below:

https://splunk-api-url:8089/servicesNS/nobody/appname/search/jobs/export?output_mode=json&count=1&search=|savedsearch%20savedsearchname%20|search%20ProjectCode=1NN

I need to include a time filter in the above URL. Please let me know how to do it. In the saved search we have DateAndTime values; when I try to filter on DateAndTime the same way I am filtering on ProjectCode, it is not working:

https://splunk-api-url:8089/servicesNS/nobody/appname/search/jobs/export?output_mode=json&count=1&search=|savedsearch%20savedsearchname%20|search%20ProjectCode=1NN%20|search%20DateAndTime=2020-07-14%2022:20
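A guess at the cause, sketched rather than confirmed: the value 2020-07-14 22:20 contains a space, so in SPL it needs to be quoted, and the quotes themselves have to be URL-encoded (%22) inside the search parameter. The un-encoded search would look like this (savedsearchname, ProjectCode, and DateAndTime are taken from the question):

| savedsearch savedsearchname
| search ProjectCode=1NN DateAndTime="2020-07-14 22:20"

URL-encoded, the parameter becomes search=%7Csavedsearch%20savedsearchname%20%7Csearch%20ProjectCode%3D1NN%20DateAndTime%3D%222020-07-14%2022%3A20%22. Alternatively, if the event timestamp (_time) is what you really want to bound, the export endpoint also accepts earliest_time and latest_time as separate query parameters.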
I have approx. 35k buckets under Fixup Tasks Pending (35k).

Under Fixup Category Search Factor:
Current Status - Missing enough suitable candidates to create searchable copy in order to meet replication policy. Missing={ default:1 }
Time in Fixup - 17 hour(s) 8 minute(s)
Fixup Reason - unmet rf

Under Fixup Category Replication Factor:
Status - No possible srcs for replication
Time in Fixup - 17 hour(s) 8 minute(s)
Fixup Reason - unmet rf

Environment details: 3 x Indexers in a cluster, 1 ES, 2 regular non-clustered SHs, 1 DS. RF=2, SF=2. The third indexer was newly added 4 days ago.

Can someone please advise on the cause of these issues and how I can fix them?

Background: after adding the new (third) indexer, there was a storage issue that put it into automatic detention, but we eventually fixed that and brought it back into the cluster from detention. All indexers are up and running. After that, to fix the above issue, based on some posts I did a rolling restart and restarted the CM, but there is still no change.
I was wondering if there is a report that can be run that would aggregate errors. Basically, I can see a list of errors, but I'd like to prioritize the errors that happen most frequently. We used to have something like this in New Relic, and I've been searching for something similar in AppD but haven't found it.