All Topics

Hello, The requirement is to find gaps of data unavailability (start time and end time) within a given time range. The condition is: if a specific weekday has an event in a certain period (say the first Sunday) but the same weekday in another week (say the second Sunday) has no event, the search should still treat the second Sunday as having had an event when calculating the duration of data unavailability.
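A rough starting point for finding empty time buckets, sketched as a minimal SPL search; the index name and the one-hour granularity below are placeholders, not values from the question:

index=your_index
| timechart span=1h count
| where count=0

Consecutive zero-count buckets could then be grouped (for example with streamstats) to derive a start and end time per gap; the weekday-specific carry-forward rule described above would still need extra logic on top of this.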
I have multiple event forwardings enabled on my Phantom App for Splunk that use saved searches to trigger notable events to Phantom. We recently upgraded the app from version 4.0.35 to 4.1.73. With this upgrade, all field mappings that were saved locally in the event forwardings were erased, and now 0 fields are mapped in the event forwardings. Since almost all the mapped fields were the same in each event forwarding, I re-mapped them manually on one of the event forwardings and, while saving it, checked the "Save Mappings" option, which saves those fields in the global mappings. However, the mappings now work only for the single event forwarding where the mapping is saved locally, not for all the event forwardings as the global mapping should allow.
Troubleshooting done:
1. Tried to restore the phantom.conf file - did not work; no mapping was detected after the restore.
2. Tried to clone the single event forwarding with locally mapped fields - did not work; as soon as I change the Saved Search setting (as shown in the screenshot) and save, the mapped fields reset to 0 in the cloned event forwarding, as shown in the second screenshot.
I really don't want to map the fields locally by hand, since that would mean close to 2000 fields in total (300 fields on each of the 7 event forwardings). Any help on this issue is appreciated. This is the sample event forwarding config page; the field mappings get reset after saving the event forwarding. I am on Splunk 8.1.5.
Hi, I want to create a summary index for the OS metrics below. How can I achieve this?
1. Avg CPU per week
2. Avg memory per week
3. Avg /var/log/ % used, per week
4. Number of processes running, per week
Thanks
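One possible pattern, sketched in SPL, is a scheduled search that writes weekly averages into a summary index with collect. The index, sourcetype, field names, and summary index name below are placeholders, not values from the question, and assume the metrics are already extracted as fields:

index=os_metrics sourcetype=os_stats
| bin _time span=1w
| stats avg(cpu_pct) as avg_cpu avg(mem_pct) as avg_mem avg(var_log_used_pct) as avg_var_log_used avg(process_count) as avg_processes by _time host
| collect index=os_metrics_summary

Scheduled weekly, the same report could instead be saved with summary indexing enabled rather than an explicit collect; either way the dashboard then reads from the summary index.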
Hi Experts, I am wondering about the best way of comparing the data below. I have a query which returns results like so:
index=myindex sourcetype=mysourcetype host="myhost" | table process, tier, country
This returns 100 or so processes with their tier and country, as expected. There are only 4 countries: uk, usa, denmark and spain. It returns something like this:
process      tier      country
process1     roman     uk
process2     roman     usa
process3     roman     denmark
process4     anglo     uk
process5     anglo     usa
process6     anglo     denmark
process7     anglo     spain
The roman tier should be present in each country. If spain is missing, as above, how do I show only the missing entry for spain as the outlier? This is basically for a reconciliation purpose, so we can see what's missing. Thanks in advance!
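A minimal sketch of one way to surface the missing tier/country combinations, assuming every tier observed in the data is expected in every country observed in the data (that expectation is my assumption, not stated in the question):

index=myindex sourcetype=mysourcetype host="myhost"
| chart count over tier by country
| fillnull value=0
| untable tier country count
| where count=0
| table tier country

The rows with count=0 are the tier/country pairs with no processes, e.g. roman/spain in the example above.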
I am coming across an interesting problem where notables are being generated for each event in Splunk with unique notable IDs, despite trigger conditions being set and notables being set to "Trigger Once". For example, if we are looking for 5 or more failed user login events over the span of 10 minutes, we receive 5 notable alerts in our queue, one per event, despite having the count set to >=5, other trigger conditions set, and the trigger set to "once" rather than "for each result". It seems that even when the trigger is set to "once", it still behaves as if it were set to "for each result". Is there a way to enforce this in SPL itself, so that a query yielding multiple results generates only one notable event?
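One common pattern is to aggregate inside the correlation search so it returns at most one row per user before the notable action fires. A minimal sketch, where the index, sourcetype, and field names are placeholders rather than anything from the question:

index=auth sourcetype=login_events action=failure earliest=-10m
| stats count as failed_logins by user
| where failed_logins>=5

Because the search itself emits at most one row per user, even a per-result trigger can only create one notable per user for the window.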
Hello, I have the following issue. I have a Search A that yields the state of a device. I would like to supplement the state with the command that led to that state. Therefore I am looking to get the last command from Search B, with the same device ID, that occurred before my event in Search A. To do this, I have used a left join.

index="IndexA" sourcetype="SourceA" ..... | eval time=_time | table time ID state
| join type=left left=A right=B usetime=true earlier=true where A.ID=B.ID
    [search index="IndexA" sourcetype="SourceB" ... | eval time=_time | table time ID command | sort - _time]
| eval timediff='A.time'-'B.time'

Now I have the following issues:
1. Is there a direct way to access the internal field _time? (Using 'A._time' doesn't work, which is why I am saving it in a field of my own named time.)
2. Somehow, if I don't use table at the end of the search command, I cannot access the value of time from subsearch B using 'B.time'. What is the reason for this?
3. Most important: I am getting results from subsearch B that are newer than my event in Search A. Is this because I used sort inside the subsearch?
Thanks and best regards
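As a side note, a join-free sketch of the same idea: combine both sourcetypes, sort each device's events into time order, and carry the most recent command forward onto each state event. Sourcetype and field names follow the question; the streamstats usage is an assumption about what fits the data, not the original approach:

index="IndexA" (sourcetype="SourceA" OR sourcetype="SourceB")
| eval command=if(sourcetype=="SourceB", command, null())
| eval command_time=if(sourcetype=="SourceB", _time, null())
| sort 0 ID _time
| streamstats last(command) as last_command last(command_time) as last_command_time by ID
| where sourcetype=="SourceA"
| eval timediff=_time-last_command_time
| table _time ID state last_command timediff

Because the events are sorted ascending in time per ID, the carried-forward command is always the latest one at or before the state event, which also sidesteps the "newer than my event" problem from the join.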
Hello everyone! I'm looking for assistance with fine-tuning Enterprise Security. I've been working hard on configuring ES to start generating notable events. We're getting lots of them: almost 73k Access notables, 66 Endpoint notables, 2.4k Network notables, 0 Identity notables, 11 Audit notables, and 3.8k Threat notables. What does typical fine-tuning entail? Finding out what is a false positive and modifying the correlation searches to ignore certain criteria? What else could I be missing?
Is there any way to inject data from one running Splunk Enterprise (on-premises) instance into another through the search API? I can find the documented search endpoints for Splunk (https://docs.splunk.com/Documentation/Splunk/8.1.2/RESTTUT/RESTsearches), but I am looking for a way to inject data through these endpoints without using a forwarder. Is this possible?
Hi everyone, I'm using the Splunk app https://splunkbase.splunk.com/app/1546/ . I want to split a single JSON array event into multiple events, one per "addrRef" record. Below is my JSON array example, and at the end the response handler I wrote, which is not working but is not producing any error either when I look at my _internal logs. Could anyone tell me why my response handler is not working, or what I'm doing wrong? Best regards

JSON array (objects that were collapsed in the original viewer are shown as { ... } or [ ... ]):

{
  "result": {
    "ipamRecords": [
      {
        "addrRef": "IPAMRecords/248211",
        "address": "10.1.1.20",
        "claimed": false,
        "customProperties": { ... },
        "device": "",
        "dhcpLeases": [ ... ],
        "dhcpReservations": [ ... ],
        "discoveryType": "ARP",
        "dnsHosts": [ ... ],
        "extraneousPTR": false,
        "interface": "",
        "lastDiscoveryDate": "Feb 3, 2022 08:11:04",
        "lastKnownClientIdentifier": "AB:BA:CA:FF:EA:66",
        "lastSeenDate": "Feb 3, 2022 07:55:17",
        "ptrStatus": "OK",
        "state": "Assigned",
        "usage": 25140
      },
      {
        "addrRef": "IPAMRecords/357310",
        "address": "10.2.2.21",
        "claimed": false,
        "customProperties": { ... },
        "device": "",
        "dhcpLeases": [ ... ],
        "dhcpReservations": [ ... ],
        "discoveryType": "Ping",
        "dnsHosts": [ ... ],
        "extraneousPTR": false,
        "interface": "",
        "lastDiscoveryDate": "Feb 2, 2022 13:40:17",
        "lastKnownClientIdentifier": "BA:BB:AA:B5:28:AC",
        "lastSeenDate": "Nov 3, 2017 17:07:34",
        "ptrStatus": "OK",
        "state": "Assigned",
        "usage": 24596
      },
      { ... },
      { ... },
      { ... },
      { ... },
      { ... }
    ],
    "totalResults": 7
  }
}

My response handler, which is not working but gives no error on "index=_internal host=Myhost" (file: /opt/splunk/etc/apps/rest_ta/bin/responsehandlers.py):

class MenAndMiceHandler:
    def __init__(self, **args):
        pass

    def __call__(self, response_object, raw_response_output, response_type, req_args, endpoint):
        if response_type == "json":
            output = json.loads(raw_response_output)
            # Note: iterating over the parsed object here walks the top-level keys
            # (i.e. "result"), not the individual ipamRecords entries.
            for addrRef in output:
                print_xml_stream(json.dumps(addrRef))
        else:
            print_xml_stream(raw_response_output)
Hello, I have set up a SaaS trial account and followed the steps end to end for the below:
Windows 2019 Server - agent runs successfully and connects to the SaaS instance, but no metrics are showing.
Windows 10 Desktop - agent runs successfully and connects to the SaaS instance, but no metrics are showing.
Ubuntu - agent runs successfully and connects to the SaaS instance, but no metrics are showing.
Agent versions: machineagent-bundle-64bit-windows-22.1.0.3252 and machineagent-bundle-64bit-linux-22.1.0.3252
How can I populate data from a primary index into a summary index using the collect command? Can collect be used to copy logs from the primary index into the summary index?
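Yes, collect writes the results of a search into another index. A minimal sketch, where the index names and the aggregation are placeholders rather than anything from the question:

index=primary_index sourcetype=your_sourcetype
| stats count by host
| collect index=summary_index

Raw events can also be collected without aggregation, but a summary index is usually fed by a scheduled search that stores pre-aggregated results like the stats above.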
I am trying to collect data from the Azure Graph and CAS APIs using the Splunk Add-on for Microsoft Office 365 app. I tried this first on a Windows server and got this error:

2022-02-03 11:34:12,218 level=INFO pid=7340 tid=MainThread logger=splunksdc.collector pos=collector.py:run:251 | | message="Modular input started."
2022-02-03 11:34:12,508 level=INFO pid=7340 tid=MainThread logger=splunk_ta_o365.common.settings pos=settings.py:load:36 | datainput=b'testsignins' start_time=1643884452 | message="Load proxy settings success." enabled=False host=b'' port=b'' username=b''
2022-02-03 11:34:12,802 level=INFO pid=7340 tid=MainThread logger=splunk_ta_o365.common.portal pos=portal.py:get_v2_token_by_psk:160 | datainput=b'testsignins' start_time=1643884452 | message="Acquire access token success." expires_on=1643888051.8024929
2022-02-03 11:34:13,806 level=DEBUG pid=7340 tid=MainThread logger=splunk_ta_o365.modinputs.graph_api pos=graph_api.py:run:102 | datainput=b'testsignins' start_time=1643884452 | message="Start Retrieving Graph Api Audit Messages." timestamp=1643884453.8066385 report=b'signIns'
2022-02-03 11:34:13,806 level=INFO pid=7340 tid=MainThread logger=splunk_ta_o365.common.portal pos=portal.py:get:462 | datainput=b'testsignins' start_time=1643884452 | message="Calling Microsoft Graph API." url=b'https://graph.microsoft.com/v1.0/auditLogs/signIns' params=None
2022-02-03 11:34:21,628 level=ERROR pid=7340 tid=MainThread logger=splunk_ta_o365.modinputs.graph_api pos=graph_api.py:run:118 | datainput=b'testsignins' start_time=1643884452 | message="Error retrieving Cloud Application Security messages." exception=Invalid format string
2022-02-03 11:34:21,628 level=ERROR pid=7340 tid=MainThread logger=splunk_ta_o365.modinputs.graph_api pos=utils.py:wrapper:72 | datainput=b'testsignins' start_time=1643884452 | message="Data input was interrupted by an unhandled exception."
Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\splunk_ta_o365\bin\splunksdc\utils.py", line 70, in wrapper
    return func(*args, **kwargs)
  File "C:\Program Files\Splunk\etc\apps\splunk_ta_o365\bin\splunk_ta_o365\modinputs\graph_api.py", line 235, in run
    return consumer.run()
  File "C:\Program Files\Splunk\etc\apps\splunk_ta_o365\bin\splunk_ta_o365\modinputs\graph_api.py", line 114, in run
    self._ingest(message, source)
  File "C:\Program Files\Splunk\etc\apps\splunk_ta_o365\bin\splunk_ta_o365\modinputs\graph_api.py", line 125, in _ingest
    expiration = int(message.update_time.strftime('%s'))
ValueError: Invalid format string
2022-02-03 11:34:21,632 level=INFO pid=7340 tid=MainThread logger=splunksdc.collector pos=collector.py:run:254 | | message="Modular input exited."

Authentication seems to be working, but it looks like the API returns a value the add-on can't handle. I tested the Azure app and CAS token using PowerShell and had no issues. As a last-ditch effort I tried another server, which happened to be a Linux server, and when I set the app up there everything worked without issues. This made me think that the Graph and CAS inputs do not work on Windows servers, since that was the only difference. I then tested on another Windows server and got the same error. Has anyone else here seen the same result, or managed to get this running on a Windows server? The app in Splunkbase says it is platform independent, so it should run on Windows too.
So I have a particular number of important CSV files that I need to ensure have no errors, which I can check with the command:
find . -name "bad_ips.csv" -exec 2>/dev/null echo {} \; -exec grep -n ",," {} \; | grep -B 1 "^1:"
(I run about 5 of these against the CSV files I am most interested in, with 'bad_ips' as an example.) I am after a way of automating this so it runs each day and I can see the results in a Splunk dashboard. Is this possible?
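One possible approach is to index (or monitor) the CSV files themselves and let a scheduled search do the error check, so a dashboard panel can read from it directly. A minimal SPL sketch, where the index and sourcetype names are placeholders of my own:

index=csv_checks sourcetype=important_csv ",,"
| stats count as bad_lines by source
| sort - bad_lines

Alternatively, the existing find/grep command could be wrapped in a scripted input so its output is indexed daily and charted the same way.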
As far as I know, the size of the ITSI or ES license a customer buys should be equal to the basic Splunk Enterprise license. But what if a customer wants a single Splunk Enterprise installation dedicated to different uses? For example, a customer buys a 3TB license and expects to use 1TB for security-related events, 1TB for service monitoring, and the rest for other uses, mostly business intelligence. Pricing ITSI and ES at 3TB each seems a bit expensive. Does a license pool help here in any way? Even if so, the license pool is allocated per indexer, not per index, if I remember correctly, so that would mean installing separate indexer clusters for each of those uses.
Hi All, I have the below Splunk data:
"new request: 127.0.0.1;url=login.jsp"
which contains the IP address (e.g. 127.0.0.1) and the URL (login.jsp).
I want to show a table which displays the number of requests made to login.jsp from every IP address on a per-minute basis, like below:
TimeStamp(Minutes)     IPADDRESS     COUNT
2022-01-13 22:03:00    ipaddress1    count1
2022-01-13 22:03:00    ipaddress2    count2
2022-01-13 22:03:00    ipaddress3    count3
2022-01-13 22:04:00    ipaddress1    count1
2022-01-13 22:04:00    ipaddress2    count2
2022-01-13 22:04:00    ipaddress3    count3
with the count displayed in descending order. Please advise how to achieve this.
Thanks
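A minimal SPL sketch of one way to do this, assuming the IP and URL still need to be extracted from the raw message; the index and sourcetype names are placeholders:

index=web_requests sourcetype=app_logs "new request"
| rex field=_raw "new request: (?<ipaddress>\d+\.\d+\.\d+\.\d+);url=(?<url>\S+)"
| search url="login.jsp"
| bin _time span=1m
| stats count by _time ipaddress
| sort - count

If the fields are already extracted, the rex and search lines can be dropped and the filter moved into the base search.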
I am trying to identify the values in the logs that do not match the content of a lookup file, but I am not getting the expected results. Here is the sample log:
192.168.198.92 - - [22/Dec/2002:23:08:37 -0400] "GET / HTTP/1.1" 200 6394 www.yahoo.com  "xab|xac|za1"
192.168.198.92 - - [22/Dec/2002:23:08:38 -0400] "GET /images/logo.gif HTTP/1.1" 200 807 www.yahoo.com  "None"
192.168.198.92 - - [22/Dec/2002:23:08:37 -0400] "GET / HTTP/1.1" 200 6394 www.yahoo.com  "xab|xac|za1"
192.168.72.177 - - [22/Dec/2002:23:32:14 -0400] "GET /news/Tshirts.html HTTP/1.1" 200 3500 www.yahoo.com  "yif"
192.168.72.177 - - [22/Dec/2002:23:32:14 -0400] "GET /news/Jeans.html HTTP/1.1" 200 3500 www.yahoo.com  "zab|yif|ba1|ba1"
192.168.72.177 - - [22/Dec/2002:23:32:14 -0400] "GET /news/Polos.html HTTP/1.1" 200 3500 www.yahoo.com  "zab|yif"
The last value of each log line (e.g. "xab|xac|za1") is stored as the signature field in Splunk, meaning multiple signatures matched the request; for some requests only one signature triggered. I would like to compare the signatures in the logs with the list of signatures in the lookup table. Example lookup: the lookup table signature.csv contains these values:
signature_lookup
xab
yab
xac
zac
zal
yif
zab
bal
I have tried multiple queries that split the signature field and check it against the lookup file, so that only unmatched values are displayed, but I am getting both matched and unmatched content in the results and don't know where the mistake is.
index=* sourcetype=* NOT(signature="None")
| makemv delim="|" signature
| mvexpand signature
| lookup signature.csv signature_lookup
| search signature!=signature_lookup
| table signature
| dedup signature
I also tried the query below, but no luck:
index=* sourcetype=* NOT(signature="None")
| eval sign_split=mvindex(split(signature,"|"),0)
| lookup signature.csv signature_lookup as sign_split
| table signature
| dedup signature
Can someone help me resolve this?
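A minimal sketch of how this kind of comparison is often expressed: match the expanded signature against the lookup's key field and keep only the rows where the lookup returned nothing. The OUTPUT alias name ("matched") is mine, not from the original queries:

index=* sourcetype=* NOT signature="None"
| makemv delim="|" signature
| mvexpand signature
| lookup signature.csv signature_lookup AS signature OUTPUT signature_lookup AS matched
| where isnull(matched)
| dedup signature
| table signature

The key difference from the attempts above is the explicit AS/OUTPUT mapping, so the lookup is joined on the expanded signature value and unmatched rows come back with a null field to filter on.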
Hello, I have two data sets residing in the same index but with a different source/host:
index="tickets" host="RMM_DATA"
index="tickets" source="fs_webhooks"
The first data set contains multiple ticket fields, including ID. The second data set contains only one field, ID. I want to display all the events whose ID value is present only in the first data set and not in the second. The query below gives me a left join, but I want to get the IDs (and related fields) that exist only in the first data set.
index="tickets" host="RMM_DATA"
| sort 0 -_time
| dedup ID
| where DepartmentName!="XYZ" AND DepartmentName!="MNO" AND Status!="Closed" AND Status!="Resolved" AND Priority="Urgent" AND Type="Incident"
| table ID Type DepartmentName "Created Date" Location Priority Subject Queue Status Analyst "Last Updated"
| join type=left ID
    [search index="tickets" source="fs_webhooks"
    | rename freshdesk_webhook.ticket_id as ID
    | sort 0 -_time
    | dedup ID
    | table ID]
| table ID Type DepartmentName "Created Date" Location Priority Subject Queue Status Analyst "Last Updated"
Can you please suggest how I can achieve this? Thank you.
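A minimal sketch of one common way to express this as an anti-join: filter the first data set with a NOT subsearch built from the second (the rename follows the field name already used in the question):

index="tickets" host="RMM_DATA"
    NOT [search index="tickets" source="fs_webhooks"
        | rename freshdesk_webhook.ticket_id as ID
        | dedup ID
        | fields ID]
| sort 0 -_time
| dedup ID
| where DepartmentName!="XYZ" AND DepartmentName!="MNO" AND Status!="Closed" AND Status!="Resolved" AND Priority="Urgent" AND Type="Incident"
| table ID Type DepartmentName "Created Date" Location Priority Subject Queue Status Analyst "Last Updated"

The usual subsearch limits apply (around 10,000 results by default), so at larger volumes a stats-based approach over both data sets may be preferable.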
Hello All, I am working on building use cases for PCI compliance. I just learned that Splunk has a PCI compliance app for checking whether a client's data is PCI compliant. I am wondering if I can get sample data from somewhere to test my use cases and run the PCI compliance app as well. Thanks in advance, Manish Kumar
Username   status
User1      login
User2      login
User3      login
User1      logout
User1      login
User1      logout
Based on the last status of each user, the expected answer is a count of 2 users with login and 1 user with logout. If I have logs like the above, can you please help me get that answer, based on each user's last status?
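A minimal SPL sketch of one way to count users by their most recent status; the index and sourcetype are placeholders, and Username and status are assumed to be extracted fields as shown in the sample:

index=your_index sourcetype=your_sourcetype
| stats latest(status) as last_status by Username
| stats count by last_status

latest() takes the value from each user's most recent event, so the final stats gives login=2 and logout=1 for the sample above.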
I need to run three different queries, each based on the previous one's results. For example:
1. First query: index=* search | top result. Let's say I pick the first result, which is "abc".
2. Second query: use the first result and inject it here: index=* search result=abc | top status
3. Third query: use the second result and inject it here: index=* search result=abc status=xyz | timechart count by "something"
I am not sure if there is an easier way to do this, or whether this approach would take more time and bandwidth. Any help or guidance would be appreciated.
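If "take the top value from the previous step" is an acceptable selection rule, the chaining can be sketched with nested subsearches so the whole thing runs as one query. The structure below is my assumption about how the steps fit together; the index, the result and status fields, and the final split-by field come from the example in the question:

index=*
    [search index=* | top limit=1 result | fields result]
    [search index=*
        [search index=* | top limit=1 result | fields result]
    | top limit=1 status
    | fields status]
| timechart count by something

Each inner subsearch returns a single field/value pair that becomes a filter in the enclosing search; note the innermost "top result" search runs twice in this form, and the map command is another option if the steps must stay separate.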