All Topics



Hello Splunkers, while building a Splunk environment for the first time, I came across the above error message while trying to connect the two indexers to the search head. I also tried to run:

./splunk add search-server <ipaddress>:8089 -auth admin:password -remoteUsername admin -remotePassword password

yet I keep running into the same error message:

An error occurred: Error while sending public key to search peer: Connect Timeout
root@spunk-sh:/opt/splunk/bin#

Could someone please help me set up the environment? Thank you.
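A "Connect Timeout" here usually means the search head simply cannot reach the peer's management port (8089 by default), so before retrying the Splunk command it is worth confirming basic TCP reachability. This is a generic Python probe, not a Splunk tool:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("<indexer-ip>", 8089) from the search head
```

If this returns False for the indexer's 8089, look at firewalls, security groups, or splunkd not listening on that host, rather than at credentials.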
Can someone please give me a Splunk query to split the events for multiple fields?

| rex field=_raw " :16R:FIN :35B:ISIN ABC1234567 :93B::AGGR//UNIT/488327,494 :93B::AVAI//UNIT/488326, :16S:FIN :16R:FIN :35B:ISIN CDE1234567 :93B::AGGR//FAMT/352000, :93B::AVAI//FAMT/352001, :16S:FIN "

I need a table as below. I've added max_match in my rex command, but when I use mvexpand for each rex individually, the values don't split into rows:

ISIN          AGGR         AVAI
ABC1234567    488327,494   488326,
CDE1234567    352000,      352001,

Report:

| rex field=_raw max_match=0 "35B:ISIN(?<ISIN>.{10})"
| rex field=_raw max_match=0 "AGGR//(?<AGGR>.{1,20})"
| rex field=_raw max_match=0 "AVAI//(?<AVAI>.{1,20})"
| table ISIN AGGR AVAI
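Because each rex builds its own independent multivalue field, mvexpand on one of them cannot keep the Nth ISIN paired with the Nth AGGR/AVAI. The usual fix is to split _raw into one record per :16R:FIN ... :16S:FIN block first, then extract the three values per block. The pairing logic, illustrated outside Splunk in plain Python:

```python
import re

raw = (":16R:FIN :35B:ISIN ABC1234567 :93B::AGGR//UNIT/488327,494 "
       ":93B::AVAI//UNIT/488326, :16S:FIN :16R:FIN :35B:ISIN CDE1234567 "
       ":93B::AGGR//FAMT/352000, :93B::AVAI//FAMT/352001, :16S:FIN")

# One record per :16R:FIN ... :16S:FIN block, then one ISIN/AGGR/AVAI per
# block, so the values stay paired row by row instead of being expanded
# independently.
rows = []
for block in re.findall(r":16R:FIN(.*?):16S:FIN", raw):
    isin = re.search(r"35B:ISIN (\S{10})", block)
    aggr = re.search(r"AGGR//\w+/(\S+)", block)
    avai = re.search(r"AVAI//\w+/(\S+)", block)
    rows.append((isin.group(1), aggr.group(1), avai.group(1)))
```

The same idea in SPL would be to break the event into per-block records (or zip the three multivalue fields positionally) before expanding.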
I am running a very big report which is at 95% after 36 hours, and I see that the result size is ~2 GB. The results should be sent by email. How can I find the results once the report is completed, since I think the mail will fail due to the size?
I have a table of applications like this (image attached). How can I display the table like in the image below?
Will a custom command created using Python reduce search performance? For example, if I write an alternate script for the |spath command, will the custom command reduce or increase the search time compared to spath?
How can I change the ID of a dashboard? I can edit the name and description, but how do I change the dashboard ID after it has been created?
I am trying to hide RED, GREEN and YELLOW, but the XML CSS is not working for me.

<form>
  <row>
    <panel>
      <html>
        <style>
          #tbl_Summary tbody td div.multivalue-subcell[data-mv-index="1"] {display: none;}
        </style>
      </html>
      <table id="tbl_Summary">
        <title>Summary</title>
        <search>
          <query>
            index=*xyz
            | eval calsuc=case(match('code',"1"), "SUCCESS", match('code',"2"), "WARNING", match('code',"3"), "FAILURE")
            | dedup requestId
            | eval APPLICATION=case(like('apn',"/PROFILE"),"PROFILE")
            | stats count as "Total Count" count(eval(calsuc="SUCCESS")) as "TotalSuccess" count(eval(calsuc="WARNING")) as "TotalWarning" count(eval(calsuc="FAILURE")) as "TotalFailure"
            | rename TotalSuccess as S, TotalWarning as W, TotalFailure as F
            | eval SuccessPerc=round((S/(S+W+F))*100, 2)
            | eval sign=round(SuccessPerc, 0)
            | eval colorCd=if(sign>=95,"GREEN",if(95>sign AND sign>=80,"YELLOW","RED"))
            | eval ApplicationName=APPLICATION."|".'colorCd'
          </query>
          <earliest>$sltd_tm.earliest$</earliest>
          <latest>$sltd_tm.latest$</latest>
        </search>
        <option name="count">20</option>
        <option name="drilldown">row</option>
        <format type="color">
          <colorPalette type="expression">
            case(match(value,"RED"), "#DC4E41", match(value,"YELLOW"), "#F88E34", match(value,"GREEN"), "#53A051")
          </colorPalette>
        </format>
      </table>
    </panel>
  </row>
</form>
With events, I can do

| search index=foo *bar*

This will match any event containing the string "bar", regardless of where it appears. But with |inputlookup, this does not work. I can work around it using foreach, but it looks rather labored:

| inputlookup mylookup
| foreach * [| search <<FIELD>>=*bar*]

Is this the best way?
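For comparison, the foreach workaround is effectively ORing a substring test across every field of every row. A sketch of that predicate in Python, with hypothetical lookup rows:

```python
def any_field_contains(row: dict, needle: str) -> bool:
    """True if any field value contains the substring (what *bar* does for events)."""
    return any(needle in str(v) for v in row.values())

# Hypothetical lookup contents for illustration
lookup_rows = [
    {"host": "web-bar-01", "owner": "alice"},
    {"host": "db-02", "owner": "bob"},
]
matches = [r for r in lookup_rows if any_field_contains(r, "bar")]
```

Whether the SPL foreach form or a single combined predicate is faster will depend on the lookup size, but the per-row logic is the same either way.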
Hello team, I am new to using Splunk and I have a little problem. After installing my Splunk server, I started setting up the universal forwarder on a Kali Linux client, where I typed the commands below:

┌──(root㉿kali)-[/opt/splunkforwarder/bin]
└─# ./splunk add forward-server 192.168.0.24:9997
Splunk username: root
Password:
Login failed

I get an error message on the login/password. For your information:
- ping is OK between client and server
- I can connect to the web interface of the server from the client
- The port openings on the server (9997/8089) are OK
- I don't think I have a login/password problem, since it's the same one I use on the server
- I changed the password to rule out a keyboard layout problem, without success

Do you have an idea please? Regards
Hello everyone. I'm trying to find the most efficient way to filter results for a list of values that may have a match within two (or more) distinct fields. Say, a list of IP addresses that can match either the source or destination field. I'm almost certain this question has been answered in the past, but I couldn't find the correct wording to find the answer, so here I am.

To help filter some potential answers to my request: I do know that I can do something like

index=my_index ...etc... (field1 IN (value1, value2, value3, value4, value5, ...) OR field2 IN (value1, value2, value3, value4, value5, ...))

However, I am attempting to make this query more "efficient", or perhaps just less of an eyesore. Given a list of 10+ values to filter for, it's easy to see how this query can get out of hand, at least visually. Here is an example query I intuitively tried that should illustrate what I'm looking for:

index=my_index ...etc... (field1 OR field2 IN (value1, value2, value3, value4, value5, ...))

Splunk pros, please help: what am I overlooking/overthinking? Or is my first example the best (or most "efficient") way to go about this? Thanks so much!
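As far as I know, SPL has no `(field1 OR field2) IN (...)` form, so the double-IN expression (or moving the value list into a lookup) is the standard approach. The predicate being asked for, sketched in Python with hypothetical field names:

```python
watchlist = {"10.0.0.5", "192.168.1.9"}

def matches(event: dict, values: set, fields=("src_ip", "dest_ip")) -> bool:
    """True if any of the listed fields holds a watched value --
    the same predicate as (field1 IN (...) OR field2 IN (...))."""
    return any(event.get(f) in values for f in fields)

# Hypothetical events for illustration
events = [
    {"src_ip": "10.0.0.5", "dest_ip": "172.16.0.1"},
    {"src_ip": "172.16.0.2", "dest_ip": "172.16.0.3"},
]
hits = [e for e in events if matches(e, watchlist)]
```

With a long value list, a lookup file plus an automatic or subsearch-based match is usually easier to maintain than a literal IN list in the search string.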
So the goal is to have a drilldown perform an eval which sets a token with the "lookup-dataset" value for another panel to use when performing the lookup. The lookup documentation states the syntax is:

| lookup <lookup-dataset> ...

The eval includes a case() containing 3 match() calls, whose result should be set into a token:

<drilldown>
  <unset token="lookup_token"></unset>
  <eval token="lookup_token">case(match("$row.field1$","abc"),"lookup1",match("$row.field1$","def"),"lookup2",match("$row.field1$","xyz"),"lookup1")</eval>
</drilldown>

And another panel does:

<search depends="$lookup_token$" base="the_base_search">
  <query>| lookup $lookup_token$ lookup_value as field_value</query>
</search>

But the eval doesn't appear to re-evaluate on clicking a new row within the first panel; checking the search jobs shows it only ever gets the first eval value. Is there something I am missing here?
In ES 6.6.x and higher, what is the meaning of "Parse Domain from URL" under the Global Settings of Threat Intelligence Management? Does it try to parse the domain from the URLs that are the IOCs/threat artifacts, thus creating more domain IOCs, or does it try to parse the logs (i.e. Web.url, where the events are) to get the domain? I know that in the older version, the "Threat Gen" searches would search for domain IOCs in the Web.url field, but I don't think the new version does that anymore.
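As a generic illustration of what "parse domain from URL" means (deriving a domain indicator from a full-URL indicator), not a description of ES internals:

```python
from urllib.parse import urlparse

def domain_of(url: str) -> str:
    """Extract the hostname portion of a URL; scheme-less inputs get one added
    so urlparse treats the leading token as the host."""
    if "://" not in url:
        url = "http://" + url
    return urlparse(url).hostname or ""

# A full-URL IOC thus also yields a domain IOC, e.g.
# "https://evil.example.com/path?q=1" -> "evil.example.com"
```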
I am getting an error for this:

index=_internal earliest="26/02/2022:00:00:00" latest=now()
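For context, Splunk's default layout for literal earliest/latest values is %m/%d/%Y:%H:%M:%S, so the leading "26" is read as a month and the time fails to parse. The two layouts compared with Python's strptime, which uses the same format directives:

```python
from datetime import datetime

FMT = "%m/%d/%Y:%H:%M:%S"  # Splunk's default earliest/latest layout

ok = datetime.strptime("02/26/2022:00:00:00", FMT)  # month first: parses fine

try:
    datetime.strptime("26/02/2022:00:00:00", FMT)   # "month" 26: invalid
    parsed = True
except ValueError:
    parsed = False
```

Swapping the day and month (or using an ISO-style epoch/relative time modifier) should make the original search run.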
Hi everyone, I have a Splunk Enterprise standalone instance running on Ubuntu Server 14.04.6 LTS. I recently upgraded from 6.5 to 7.2, and from there to 8.1.0. There are a few custom apps installed on this Splunk instance. About a month ago, I realized that searches against the _audit index weren't returning any results. The /opt/splunk/var/log/splunk/audit.log file is receiving data, and permissions are OK. It seems like the data is not being monitored and ingested into the _audit index. I have already checked and compared backup files to find the missing link here, but no luck, and I can't find any significant error in the splunkd.log file. Please, any suggestions?
I want to pick up logs from the same directory that have *.out and *.log extensions. Is there a way to create one monitor statement with a whitelist that would pick up *.log or *.out, or can I use a selection at the end of the monitor path like http://xxx/*.log/*.out?
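A single monitor stanza can cover both extensions, since whitelist in inputs.conf is a regular expression matched against the full file path. A sketch with a placeholder directory:

```
[monitor:///var/log/myapp]
whitelist = \.(log|out)$
```

Wildcards in the monitor path itself match path segments rather than alternations, so a regex whitelist is the usual way to express "either extension".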
Hello, are there any queries we can use to find the total number of events, the total size/volume (in GB) of data, and the frequency of data coming into Splunk, by index and sourcetype? Any help will be highly appreciated, thank you!
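Two commonly used starting points, sketched as generic SPL (index scope and time range are up to you): tstats gives fast event counts per index and sourcetype, and the license usage log (on the license master) gives ingested volume:

```
| tstats count where index=* by index, sourcetype

index=_internal source=*license_usage.log type=Usage
| stats sum(b) as bytes by idx, st
| eval GB = round(bytes/1024/1024/1024, 2)
```

For frequency over time, splitting by _time (e.g. | tstats count where index=* by _time span=1h, index) shows how data arrives per hour.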
02-24-2022 21:24:10.711 INFO ScopedTimer [9796 searchOrchestrator] - search.optimize 0.030224023
02-24-2022 21:24:10.711 INFO SearchPhaseGenerator [9796 searchOrchestrator] - Failed to create phases using AST:Error in 'dbxquery' command: External search command exited unexpectedly with non-zero error code 1.. Falling back to 2 phase mode.
02-24-2022 21:24:10.711 INFO SearchPhaseGenerator [9796 searchOrchestrator] - Executing two phase fallback for the search=| dbxquery query="SELECT * FROM \"ngcs2_0\".\"public\".\"responder\"" connection="PROV_DB_WA_2.0" timeout=6000
02-24-2022 21:24:10.711 INFO SearchParser [9796 searchOrchestrator] - PARSING: | dbxquery query="SELECT * FROM \"ngcs2_0\".\"public\".\"responder\"" connection="PROV_DB_WA_2.0" timeout=6000
02-24-2022 21:24:10.712 INFO ChunkedExternProcessor [9796 searchOrchestrator] - Running process: /export/home/splunk/splunk/bin/python3.7 /export/home/splunk/splunk/etc/apps/splunk_app_db_connect/bin/dbxquery_bridge.py
02-24-2022 21:24:10.738 ERROR ChunkedExternProcessor [9807 ChunkedExternProcessorStderrLogger] - stderr: Traceback (most recent call last):
02-24-2022 21:24:10.738 ERROR ChunkedExternProcessor [9807 ChunkedExternProcessorStderrLogger] - stderr: File "/export/home/splunk/splunk/etc/apps/splunk_app_db_connect/bin/dbxquery_bridge.py", line 125, in <module>
02-24-2022 21:24:10.738 ERROR ChunkedExternProcessor [9807 ChunkedExternProcessorStderrLogger] - stderr: main()
02-24-2022 21:24:10.738 ERROR ChunkedExternProcessor [9807 ChunkedExternProcessorStderrLogger] - stderr: File "/export/home/splunk/splunk/etc/apps/splunk_app_db_connect/bin/dbxquery_bridge.py", line 121, in main
02-24-2022 21:24:10.738 ERROR ChunkedExternProcessor [9807 ChunkedExternProcessorStderrLogger] - stderr: bridge = DbxQueryBridge(sys.argv)
02-24-2022 21:24:10.738 ERROR ChunkedExternProcessor [9807 ChunkedExternProcessorStderrLogger] - stderr: File "/export/home/splunk/splunk/etc/apps/splunk_app_db_connect/bin/dbxquery_bridge.py", line 65, in __init__
02-24-2022 21:24:10.738 ERROR ChunkedExternProcessor [9807 ChunkedExternProcessorStderrLogger] - stderr: self.sock.connect(('localhost', port))
02-24-2022 21:24:10.738 ERROR ChunkedExternProcessor [9807 ChunkedExternProcessorStderrLogger] - stderr: ConnectionRefusedError: [Errno 111] Connection refused
02-24-2022 21:24:10.741 ERROR ChunkedExternProcessor [9796 searchOrchestrator] - EOF while attempting to read transport header read_size=0
02-24-2022 21:24:10.741 ERROR ChunkedExternProcessor [9796 searchOrchestrator] - Error in 'dbxquery' command: External search command exited unexpectedly with non-zero error code 1.
02-24-2022 21:24:10.741 ERROR SearchPhaseGenerator [9796 searchOrchestrator] - Fallback to two phase search failed:Error in 'dbxquery' command: External search command exited unexpectedly with non-zero error code 1.
02-24-2022 21:24:10.743 ERROR SearchStatusEnforcer [9796 searchOrchestrator] - sid:1645766650.38_B885E1F4-85FA-453C-A035-E8DCD64B223F Error in 'dbxquery' command: External search command exited unexpectedly with non-zero error code 1.
02-24-2022 21:24:10.743 INFO SearchStatusEnforcer [9796 searchOrchestrator] - State changed to FAILED due to: Error in 'dbxquery' command: External search command exited unexpectedly with non-zero error code 1.
02-24-2022 21:24:10.744 INFO SearchStatusEnforcer [9796 searchOrchestrator] - Enforcing disk quota = 10485760000
02-24-2022 21:24:10.747 INFO DispatchStorageManager [9796 searchOrchestrator] - Remote storage disabled for search artifacts.
02-24-2022 21:24:10.747 INFO DispatchManager [9796 searchOrchestrator] - DispatchManager::dispatchHasFinished(id='1645766650.38_B885E1F4-85FA-453C-A035-E8DCD64B223F', username='admin')
02-24-2022 21:24:10.747 INFO UserManager [9796 searchOrchestrator] - Unwound user context: admin -> NULL
02-24-2022 21:24:10.747 INFO SearchStatusEnforcer [9789 RunDispatch] - SearchStatusEnforcer is already terminated
02-24-2022 21:24:10.747 INFO UserManager [9789 RunDispatch] - Unwound user context: admin -> NULL
02-24-2022 21:24:10.747 INFO LookupDataProvider [9789 RunDispatch] - Clearing out lookup shared provider map
02-24-2022 21:24:10.749 ERROR dispatchRunner [28370 MainThread] - RunDispatch::runDispatchThread threw error: Error in 'dbxquery' command: External search command exited unexpectedly with non-zero error code 1.
Hi. I need to download Splunk Enterprise 7.x.x to install in a test environment, to test some compatibility issues. Where can we find these old versions? I can only see 8.x.x among the older versions, and I really need a 7.x.x version. Thanks.
I have an external API subscription that I want to call when a specific field (e.g. City_Name) is present in my Splunk event. The REST API call would query the external API for <City_Name> and add the returned data (in JSON format) into Splunk to enrich the event. I've seen something similar done with "lookup", but I'm looking for a tutorial on how to build this, so that when the event field is present, the external API is called to download the additional enrichment data. Suggestions / tutorials on how I might go about implementing this in Splunk? Thanks.
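One standard pattern for this is an external lookup: transforms.conf points external_cmd at a script that receives a CSV of lookup rows on stdin (input fields filled, output fields empty) and writes the same CSV back with the output fields completed. A minimal sketch of such a script, assuming a City_Name input field; the enrich() fields and the API call are placeholders for your subscription:

```python
import csv
import sys

def enrich(city):
    """Placeholder for the real REST call (e.g. requests.get against your API).
    The returned field names here are hypothetical; substitute your API's response."""
    return {"population": "unknown", "country": "unknown"}

def run(infile, outfile):
    """External-lookup protocol: read CSV rows, fill in the output columns,
    write the completed CSV back out."""
    reader = csv.DictReader(infile)
    writer = csv.DictWriter(outfile, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row.get("City_Name"):
            extra = enrich(row["City_Name"])
            # Only fill columns Splunk asked for in the header
            row.update({k: v for k, v in extra.items() if k in reader.fieldnames})
        writer.writerow(row)

# In the deployed script, the entry point would be: run(sys.stdin, sys.stdout)
```

The lookup is then wired up in transforms.conf (external_cmd, fields_list) and can be made automatic in props.conf so it fires whenever City_Name is present.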
There is probably a simple solution to this, but unfortunately I was not able to find the answer in the documentation, nor by searching the community.

I am ingesting events into Splunk with a certain JSON structure, e.g.

[
  {
    "foo": { "k1": 1, "k2": 2 },
    "bar": { "m1": 5, "m2": 6 },
    "string1": "hi",
    "string2": "bye"
  },
  {
    "foo": { "k1": 11, "k2": 22 },
    "bar": { "m1": 55, "m2": 66 },
    "string1": "hi2",
    "string2": "bye2"
  },
  ... and so on ...
]

I can nicely search these events in Splunk, e.g. with | where foo.k1 > 10. When searching through the REST API, I can specify which fields I would like to get, e.g. with | fields string1, foo | fields - _*

The problem I am having is as follows:

- When specifying the field "foo" (which holds a map, or some other complex structure) in the above naive way, I do not get any of its contents in my search result. The results are nicely visible in the event view of the Splunk web UI, but not in the REST API.
- When using | fields foo*, I get an expanded result: { "foo.k1": 1, "foo.k2": 2 }
- I tried spath, as in | spath output=myfoo path=foo | fields myfoo | fields - _*, which however gives me a string that contains JSON: {"myfoo": "{\"k1\": 1,\"k2\": 2}"}

All of the above are sub-optimal. I would like a search result that is pure JSON and preserves the structure of the "foo" field, so that I would get:

{ ..., "foo": { "k1": 1, "k2": 2 }, ... }

In other words: I would like to pass some of the event content through to the result as-is, so that I get a nice hierarchical data structure when parsing the JSON search result. Thanks a lot for your valuable advice!
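As a client-side workaround, if the REST API only returns the flattened foo.k1-style fields, the hierarchy can be rebuilt after parsing the JSON result. A sketch in Python:

```python
def unflatten(flat: dict) -> dict:
    """Rebuild nested structure from dotted keys,
    e.g. {'foo.k1': 1, 'foo.k2': 2} -> {'foo': {'k1': 1, 'k2': 2}}."""
    nested = {}
    for key, value in flat.items():
        parts = key.split(".")
        node = nested
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return nested

result = unflatten({"foo.k1": 1, "foo.k2": 2, "string1": "hi"})
```

This assumes dots only appear as structure separators in field names; if a literal dot can occur in a key, the reconstruction is ambiguous and a different delimiter or schema knowledge is needed.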