All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


How can I populate data from a primary index into a summary index using the collect command? By using collect, can we copy the logs from the primary index to the summary index?
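A minimal sketch of one way to do this, assuming a source index and an already-created summary index (both names here are hypothetical); collect writes whatever the search returns as new events into the target index:

index=primary_index sourcetype=my_sourcetype earliest=-1h
| fields host, status, user
| collect index=my_summary

Because collect indexes the search output as-is, narrowing with fields (or pre-aggregating with stats) keeps the summary index small; a scheduled saved search with summary indexing enabled is the usual way to run this continuously.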
I am trying to collect data from the Azure Graph and CAS APIs using the Splunk Add-on for Microsoft Office 365. I tried this first on a Windows server and got this error:

2022-02-03 11:34:12,218 level=INFO pid=7340 tid=MainThread logger=splunksdc.collector pos=collector.py:run:251 | | message="Modular input started."
2022-02-03 11:34:12,508 level=INFO pid=7340 tid=MainThread logger=splunk_ta_o365.common.settings pos=settings.py:load:36 | datainput=b'testsignins' start_time=1643884452 | message="Load proxy settings success." enabled=False host=b'' port=b'' username=b''
2022-02-03 11:34:12,802 level=INFO pid=7340 tid=MainThread logger=splunk_ta_o365.common.portal pos=portal.py:get_v2_token_by_psk:160 | datainput=b'testsignins' start_time=1643884452 | message="Acquire access token success." expires_on=1643888051.8024929
2022-02-03 11:34:13,806 level=DEBUG pid=7340 tid=MainThread logger=splunk_ta_o365.modinputs.graph_api pos=graph_api.py:run:102 | datainput=b'testsignins' start_time=1643884452 | message="Start Retrieving Graph Api Audit Messages." timestamp=1643884453.8066385 report=b'signIns'
2022-02-03 11:34:13,806 level=INFO pid=7340 tid=MainThread logger=splunk_ta_o365.common.portal pos=portal.py:get:462 | datainput=b'testsignins' start_time=1643884452 | message="Calling Microsoft Graph API." url=b'https://graph.microsoft.com/v1.0/auditLogs/signIns' params=None
2022-02-03 11:34:21,628 level=ERROR pid=7340 tid=MainThread logger=splunk_ta_o365.modinputs.graph_api pos=graph_api.py:run:118 | datainput=b'testsignins' start_time=1643884452 | message="Error retrieving Cloud Application Security messages." exception=Invalid format string
2022-02-03 11:34:21,628 level=ERROR pid=7340 tid=MainThread logger=splunk_ta_o365.modinputs.graph_api pos=utils.py:wrapper:72 | datainput=b'testsignins' start_time=1643884452 | message="Data input was interrupted by an unhandled exception."
Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\splunk_ta_o365\bin\splunksdc\utils.py", line 70, in wrapper
    return func(*args, **kwargs)
  File "C:\Program Files\Splunk\etc\apps\splunk_ta_o365\bin\splunk_ta_o365\modinputs\graph_api.py", line 235, in run
    return consumer.run()
  File "C:\Program Files\Splunk\etc\apps\splunk_ta_o365\bin\splunk_ta_o365\modinputs\graph_api.py", line 114, in run
    self._ingest(message, source)
  File "C:\Program Files\Splunk\etc\apps\splunk_ta_o365\bin\splunk_ta_o365\modinputs\graph_api.py", line 125, in _ingest
    expiration = int(message.update_time.strftime('%s'))
ValueError: Invalid format string
2022-02-03 11:34:21,632 level=INFO pid=7340 tid=MainThread logger=splunksdc.collector pos=collector.py:run:254 | | message="Modular input exited."

Authentication seems to be working, but the input fails while converting a timestamp: the traceback points at expiration = int(message.update_time.strftime('%s')), and '%s' is a glibc extension that the Windows C runtime's strftime does not support, which would explain a ValueError raised only on Windows. I tested the Azure app and CAS token using PowerShell with no issues. As a last-ditch effort I tried another server, which happened to be a Linux server; when I set the app up there, everything worked without issues. This made me think that the Graph and CAS inputs do not work on Windows servers, since that was the only difference. I then tested on another Windows server and got the same error. So I wonder if anyone else here has the same result, or has managed to get this running on a Windows server? The app in Splunk says it is platform independent, so it should run on Windows too.
I have a set of important CSV files that I need to check for errors (empty fields), which I can find today with this command:

find . -name "bad_ips.csv" -exec 2>/dev/null echo {} \; -exec grep -n ",," {} \; | grep -B 1 "^1:"

(I run about five of these against the CSV files I am most interested in, with 'bad_ips' as an example.) I am after a way of automating this so it runs each day and the results show up in a Splunk dashboard. Is this possible?
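One possible approach, assuming the CSV files are ingested with a monitor input (the index and source names here are hypothetical): let Splunk read the files and search for consecutive commas, i.e. empty fields, then put the search on a dashboard panel with a daily time range:

index=csv_audit source="*bad_ips.csv"
| regex _raw=",,"
| stats count AS bad_rows by source

Alternatively, the existing shell pipeline could keep running from cron and write its output to a file that Splunk monitors; either way, the dashboard panel is just a saved search over the resulting events.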
As far as I know, the size of the ITSI or ES license a customer buys should equal the base Splunk Enterprise license. But what if a customer wants a single Splunk Enterprise installation dedicated to different uses? For example, a customer buys a 3 TB license and expects to use 1 TB for security-related events, 1 TB for service monitoring, and the rest for other uses, mostly business intelligence. Pricing ITSI and ES at 3 TB each seems a bit expensive. Does a license pool help here in any way? Even if so, a license pool is allocated per indexer, not per index, if I remember correctly. So that would mean installing separate indexer clusters for each of those uses.
Hi All, I have the below Splunk data: "new request: 127.0.0.1;url=login.jsp", which contains the IP address (e.g. 127.0.0.1) and the URL (login.jsp).

I want to show a table that displays the number of requests made to login.jsp from every IP address on a per-minute basis, like below:

TimeStamp(Minutes)   IPADDRESS   COUNT
2022-01-13 22:03:00  ipaddress1  count1
2022-01-13 22:03:00  ipaddress2  count2
2022-01-13 22:03:00  ipaddress3  count3
2022-01-13 22:04:00  ipaddress1  count1
2022-01-13 22:04:00  ipaddress2  count2
2022-01-13 22:04:00  ipaddress3  count3

with the count displayed in descending order. Please advise how to achieve this. Thanks
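A minimal sketch, assuming the raw text looks exactly as shown (the index name is a placeholder): rex extracts the two fields, bin buckets events into one-minute slots, and stats counts requests per IP:

index=my_index "new request"
| rex "new request: (?<IPADDRESS>\d{1,3}(?:\.\d{1,3}){3});url=(?<URL>\S+)"
| search URL="login.jsp"
| bin _time span=1m
| stats count AS COUNT by _time IPADDRESS
| sort 0 _time -COUNT

The final sort keeps the minutes in order while ranking IPs by descending count within each minute.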
I am trying to identify values in the logs that do not match the contents of a lookup file, but I am not getting the expected results. Here is the sample log:

192.168.198.92 - - [22/Dec/2002:23:08:37 -0400] "GET / HTTP/1.1" 200 6394 www.yahoo.com  "xab|xac|za1"
192.168.198.92 - - [22/Dec/2002:23:08:38 -0400] "GET /images/logo.gif HTTP/1.1" 200 807 www.yahoo.com  "None"
192.168.198.92 - - [22/Dec/2002:23:08:37 -0400] "GET / HTTP/1.1" 200 6394 www.yahoo.com  "xab|xac|za1"
192.168.72.177 - - [22/Dec/2002:23:32:14 -0400] "GET /news/Tshirts.html HTTP/1.1" 200 3500 www.yahoo.com  "yif"
192.168.72.177 - - [22/Dec/2002:23:32:14 -0400] "GET /news/Jeans.html HTTP/1.1" 200 3500 www.yahoo.com  "zab|yif|ba1|ba1"
192.168.72.177 - - [22/Dec/2002:23:32:14 -0400] "GET /news/Polos.html HTTP/1.1" 200 3500 www.yahoo.com  "zab|yif"

The last value of each log line ("xab|xac|za1") is stored as the signature field in Splunk, meaning multiple signatures matched the request; for some requests only one signature may have triggered. I would like to compare the signatures in the logs with the list of signatures in the lookup table. The lookup table is signature.csv and it contains these values:

signature_lookup
xab
yab
xac
zac
zal
yif
zab
bal

I have tried multiple queries for splitting the signatures and checking them against the lookup file, so that only unmatched signatures are displayed, but I am getting both matched and unmatched content in the result. I don't know where I am making the mistake.

index=* sourcetype=* NOT(signature="None")
| makemv delim="|" signature
| mvexpand signature
| lookup signature.csv signature_lookup
| search signature!=signature_lookup
| table signature
| dedup signature

I also tried the query below, but no luck:

index=* sourcetype=* NOT(signature="None")
| eval sign_split=mvindex(split(signature,"|"),0)
| lookup signature.csv signature_lookup as sign_split
| table signature
| dedup signature

Can someone help me resolve this?
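A sketch of one way to get only the unmatched signatures; the key assumption is that the lookup call needs the event field mapped onto the lookup column (... AS signature) and an explicit OUTPUT clause, so unmatched rows can be detected with isnull:

index=* sourcetype=* NOT signature="None"
| makemv delim="|" signature
| mvexpand signature
| dedup signature
| lookup signature.csv signature_lookup AS signature OUTPUT signature_lookup AS matched
| where isnull(matched)
| table signature

Without the AS clause, lookup looks for a field literally named signature_lookup in the events, which doesn't exist; and search signature!=signature_lookup compares against the literal string "signature_lookup" rather than another field, which is why both matched and unmatched rows survive.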
Hello, I have two data sets residing in the same index but with a different source/host:

index="tickets" host="RMM_DATA"
index="tickets" source="fs_webhooks"

The first data set contains multiple ticket fields, including ID. The second data set contains only one field, ID. I want to display all the events whose ID value is present in the first data set but not in the second. The query below gives me a LEFT JOIN, but I want only the IDs (and related fields) that exist solely in the first data set:

index="tickets" host="RMM_DATA"
| sort 0 -_time
| dedup ID
| where DepartmentName!="XYZ" AND DepartmentName!="MNO" AND Status!="Closed" AND Status!="Resolved" AND Priority="Urgent" AND Type="Incident"
| table ID Type DepartmentName "Created Date" Location Priority Subject Queue Status Analyst "Last Updated"
| join type=left ID
    [search index="tickets" source="fs_webhooks"
    | rename freshdesk_webhook.ticket_id as ID
    | sort 0 -_time
    | dedup ID
    | table ID]
| table ID Type DepartmentName "Created Date" Location Priority Subject Queue Status Analyst "Last Updated"

Can you please suggest how I can achieve this? Thank you.
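One way to express this anti-join is to exclude the webhook IDs with a NOT [subsearch] instead of joining (a sketch; the usual subsearch result limits apply if the webhook ID set is very large):

index="tickets" host="RMM_DATA"
    NOT [ search index="tickets" source="fs_webhooks"
        | rename freshdesk_webhook.ticket_id AS ID
        | dedup ID
        | fields ID ]
| sort 0 -_time
| dedup ID
| where DepartmentName!="XYZ" AND DepartmentName!="MNO" AND Status!="Closed" AND Status!="Resolved" AND Priority="Urgent" AND Type="Incident"
| table ID Type DepartmentName "Created Date" Location Priority Subject Queue Status Analyst "Last Updated"

The subsearch expands to (ID=v1 OR ID=v2 ...), so the NOT drops every event whose ID also appears in the webhook data set.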
Hello All, I am working on building use cases for PCI compliance. I just learned that Splunk has a PCI compliance app for checking whether a client's data is PCI compliant. I am wondering if I can get sample data from somewhere to test my use cases and run the PCI compliance app as well.

Thanks in advance, Manish Kumar
Username  status
User1     login
User2     login
User3     login
User1     logout
User1     login
User1     logout

Based on each user's last status, the login count is 2 and the logout count is 1. If I have logs like the ones above, can you please help me get that answer, i.e. the counts by each user's last status?
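A minimal sketch, assuming each event carries Username and status fields and event time orders the login/logout sequence (index name is a placeholder): latest() keeps each user's most recent status, and a second stats counts users per final status:

index=my_index
| stats latest(status) AS last_status BY Username
| stats count BY last_status

With the sample above this yields login=2 (User2, User3) and logout=1 (User1), matching the expected answer.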
I need to run three different queries, each based on the previous one's result. For example:

1) First query: index=* search | top result. Say I pick the first result, which is "abc".
2) Second query: inject the first result here: index=* search result=abc | top status
3) Third query: use the second result in the final search: index=* search result=abc status=xyz | timechart count by "something"

I am not sure if there is an easier way to do it, or whether this would take more time and bandwidth. Any help would be really appreciated; I need some guidance here.
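One way to chain these automatically is with subsearches, which run first and feed their output into the outer search as field=value terms (a sketch; the field names come from the example above, and limit=1 stands in for "pick the first result"):

index=* search
    [ search index=* search | top limit=1 result | fields result ]
    [ search index=* search
        [ search index=* search | top limit=1 result | fields result ]
      | top limit=1 status | fields status ]
| timechart count by something

On a dashboard, the same chaining is often done more cheaply with three panels and tokens set by drilldowns, so each step runs once instead of being nested.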
When I check the lookup command with WILDCARD matching, the command stops working once the file size becomes large. Does anyone know of a related setting or something? I'm using Splunk version 8.2.2.1. The situation is shown below.

transforms.conf:

[lookup_test]
batch_index_query = 1
case_sensitive_match = 0
filename = lookup_test.csv
match_type = WILDCARD(field)
max_matches = 1

lookup_test.csv:

field
"*.example.com"
"*.example.com"
.
.
.

(I used the same word repeatedly for checking.)

Search query (I just want to match the domain with WILDCARD):

| makeresults annotate=true
| eval _raw="domain aa.example.com"
| multikv forceheader=1
| table domain
| lookup lookup_test field as domain OUTPUTNEW field as field_result

So the expected result is below:

domain          field_result
aa.example.com  *.example.com

If lookup_test.csv is 620,000 lines (about 9.5 MB), the WILDCARD match works fine. But if lookup_test.csv is 630,000 lines (about 9.7 MB), the WILDCARD match doesn't work: the field_result value becomes blank, and only EXACT values match:

domain          field_result
aa.example.com
*.example.com   *.example.com

I also tried other words (e.g. "*.aexample.com"). Whenever the lookup file grows beyond about 9.5 MB, the WILDCARD match stops working and only EXACT matches are returned. So I think this is related to the lookup file size, but I couldn't find any documentation.
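This behavior looks consistent with the max_memtable_bytes setting in limits.conf, whose default is 10,000,000 bytes (roughly 9.5 MB): lookup files larger than that are indexed on disk instead of being held in memory, and, as far as I understand it, the on-disk index only supports exact matching, so WILDCARD silently degrades. A sketch of the change to try (the size here is illustrative; raising it increases memory use wherever the lookup runs):

limits.conf:

[lookup]
max_memtable_bytes = 26214400

A restart is typically needed after changing limits.conf; for very large wildcard sets, a KV store lookup or splitting the file may scale better.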
Hi Team, I have a situation where I need to set a field value to 'true' or 'false' in a normal search query based on another field. Example:

index=xxx host=xxx sourcetype=xxx productcode="RE" countryid="74321"

What I need is: if the field countryid equals "74321", the other field foundincache should be set to "false"; otherwise it should be set to "true". I tried something like this, but it doesn't take the value from inscache; I mean, inscache is not working as a variable:

index=xxx host=xxx sourcetype=xxx productcode="RE" countryid="74321"
| eval countryid="70207"
| eval inscache=if(countryid=="70207","false","true")
| search foundincache=inscache
| stats count by foundincache

Is there a way to do this? I tried Google searches etc. but can't find this anywhere. Many thanks in advance, Nishant
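The likely culprit is that the search command treats inscache as the literal string "inscache" rather than as a field reference; where compares field to field. A sketch under that assumption:

index=xxx host=xxx sourcetype=xxx productcode="RE"
| eval inscache=if(countryid=="74321","false","true")
| where foundincache=inscache
| stats count by foundincache

Note that the original also overwrote countryid with eval before testing it, which forces the if() down one branch every time; dropping that line lets the real event value drive the comparison.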
I am pretty new to Splunk and trying to figure out how alert notifications and attaching a script to them work. My alert basically returns a line from a log stream every time it matches my search criteria, something like this:

process completed for config some_name having RUN_ID 1129

(it could be multiple lines). My goal is to get the "config_name" part from here and send it as a column name into a SQL query that I put in a bash or Python script:

select "config_name" from table;

How are the alert result and the script connected? Can someone give an example? I saw a few posts (https://community.splunk.com/t5/Alerting/how-to-pass-custom-strings-from-a-Splunk-Alert-into-a-python/m-p/322664) but am not quite getting it. Any help would be appreciated!
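Part of the wiring can happen in SPL itself: if the alert's search extracts config_name as a field, the alert action can reference it (for example as a $result.config_name$ token in a custom alert action's parameters, or the script can read it from the results file the alert hands over). A sketch of the extraction, assuming the log line format shown above and a hypothetical index name:

index=my_index "process completed for config"
| rex "process completed for config (?<config_name>\S+) having RUN_ID (?<run_id>\d+)"
| table _time config_name run_id

With the field extracted, the script side only has to parse the results it receives rather than re-parsing raw log text.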
Hi Splunk community! I'm new to Splunk, so I'm not very clear on the consequences of updating indexes.

1. For example, if index1 indexes from file1, but in the future I want it to index from file2 instead, are there any implications if I just update the stanza in the inputs.conf file to point to file2 instead of file1 (see the sketch below)? Or do I need to delete the current index, create a new one, and then point it at file2?
2. If I want to add more fields to the stanza of the indexed file, do I need to recreate the index, or can I just add the field to the stanza?

Thank you in advance!
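For question 1, repointing is normally just an inputs.conf change: the index is only a destination, so events already indexed from file1 stay searchable and new events from file2 land in the same index; nothing needs to be deleted or recreated. A sketch with hypothetical paths:

# before: index1 receives events from file1
[monitor:///var/log/file1.log]
index = index1
sourcetype = my_sourcetype

# after: the same index now receives events from file2
[monitor:///var/log/file2.log]
index = index1
sourcetype = my_sourcetype

For question 2, settings added to an input stanza only affect data indexed after the change; already-indexed events are not rewritten, and no index recreation is needed.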
I am upgrading a 6.6.x Splunk Enterprise instance, and according to the upgrade manual I have to upgrade it to version 7.2.x first. But 7.2.x isn't listed on the older-version download page, and I can't find anything about upgrading without downloading it first. If anyone can show me where I can download a 7.2.x version, or a workaround for upgrading a 6.6.x Splunk Enterprise, I will be grateful.
Hi All,

I have already created a custom visualization and am having an issue with the drilldown.

In the XML I set the drilldown option:

<option name="drilldown">all</option>

Inside the drilldown:

<drilldown>
  <set token="test1">$click.value$</set>
  <set token="test2">$click.field$</set>
</drilldown>

I also set the title of the panel to:

<title>Test1: $test1$, test2: $test2$</title>

JS in the custom viz:

var payload = {
    action: SplunkVisualizationBase.FIELD_VALUE_DRILLDOWN,
    data: {}
};
payload.data[field] = value;
this.drilldown(payload);

On click, the drilldown sets the tokens to the literal strings $click.value$ and $click.field$ instead of the clicked values. Any help?
Hi, I'm having an issue with my deployer and search head cluster while upgrading Enterprise Security. In step 8 of the Splunk doc below, it states that ES will recognize it is being installed on a cluster during app setup. Mine reports that it is setting up on a standalone instance, the deployer. What do I need to change for the GUI to recognize that I am uploading for the cluster? See screenshot for additional detail.

Upgrading Enterprise Security in a search head cluster environment, step 8: https://docs.splunk.com/Documentation/ES/7.0.0/Install/UpgradeEnterpriseSecuritySHC

Thanks
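If the GUI keeps treating the deployer as standalone, the same install docs describe a command-line path that states the deployment type explicitly; if I recall the ES documentation correctly, setup can be driven from the deployer's CLI like this (credentials are placeholders):

splunk search '| essinstall --deployment_type shc_deployer' -auth admin:yourpassword

Running essinstall with the deployment type spelled out avoids relying on the setup page's auto-detection.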
I have the following query that I am working to turn into a prediction. I am able to get the volume to predict, but I wanted to also predict a failure rate. When I add in the failure count, though, I do not get any results; I confirmed the query works fine when the predict is removed. Is this a limitation of predict?

| timechart span=15m count as volume, count(eval(match(login, "failure"))) as "failures"
| predict volume, failures

I also see: "Invalid time series index: 2".
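A sketch of what may fix it: the predict command documents its multivariate form with a space-separated field list rather than commas, and quoting the field name in timechart can also get in the way, so it may be worth trying:

| timechart span=15m count as volume, count(eval(match(login, "failure"))) as failures
| predict volume failures

If one of the listed fields has no data points in the search window, predict can also complain about the series, so confirming that failures is non-empty over the range is a useful first check.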
Hello, how are you guys? I'm trying, unsuccessfully, to give access to the Config Explorer app to a user with the Power User role. The user gets this error when trying to enter the app in the UI:

insufficient permission to access this resource

Maybe it's a capabilities problem? Does someone know the capabilities needed to be able to enter the Config Explorer app? I know users with the admin role can enter without problems, but power users cannot. Thanks in advance!
Hi all, I have an issue here. I was trying to install apps through the deployment server and noticed that the search head, cluster masters, and indexers are missing from the deployment server's forwarder management. The only server showing up is our heavy forwarder. I think they disappeared after the upgrade from 8.2.0 to 8.2.4; the heavy forwarder is still on version 8.2.0 at the moment. I made a backup before upgrading and tried to restore it, but that didn't work. Any assistance would be appreciated. Thank you.