All Topics


My goal is to match whatever comes after "Commit Description:" up until, but not including, the closing " after TASK0123456. I don't want to match the specific TASK[0-9]* value, just whatever someone enters in a Palo Alto commit description. I am using the following regex, and it appears to match fine in the field-extraction page's regex preview:

(?<=Description:\s)(?P<pansys_commitdes>\w+)

However, my issue is when searching in Splunk: the field shows up and seems to work correctly... until I use it in a search. My goal is effectively to search for pansys_commitdes!="*CHG*" OR pansys_commitdes!="*TASK*", with the intent of creating a report of changes that are not following change-control processes. With these searches I get no matches. I also tried pansys_commitdes=* and pansys_commitdes="*" and again get no matches, but pansys_commitdes=TASK0123456 matches fine. This is an example of a log I'm trying to match on:

Feb 22 09:08:52 paloalto.contoso.com 1,2021/02/22 09:08:52,01234567890,SYSTEM,general,0,2021/02/22 09:08:52,,general,,0,0,general,informational,"CommitAll job started processing. Dequeue time=2021/02/22 09:08:52. JobId=3938587.User: abc . Commit Description: TASK0123456",22356363,0x0,0,0,0,0,,pan-01
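A sketch of one way to widen the capture, assuming the description never itself contains a double quote (index and sourcetype below are placeholders). Note that `\w+` stops at the first non-word character, so only the first token is extracted, and that `field!="*CHG*" OR field!="*TASK*"` is true for every value (any value fails at least one of the two tests), so the OR likely needs to be an AND:

```
index=pan_logs sourcetype=pan:system
| rex "Commit Description:\s*(?<pansys_commitdes>[^\"]+)"
| search pansys_commitdes!="*CHG*" pansys_commitdes!="*TASK*"
```

The character class `[^"]+` runs greedily up to the closing quote, capturing the whole description instead of the first word.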
I have two queries that are exactly the same except for how each uses a lookup: one includes data from the lookup and the other excludes data from the same lookup. Is there a way to combine the two queries into one? The first one is:

index=abc dest="xyz.com" uri_path="access.html" http_method=POST NOT [| inputlookup filter_ips | fields src] | stats count by _time src

The second one is:

index=abc dest="xyz.com" uri_path="access.html" http_method=POST [| inputlookup filter_ips | fields src] | stats count by _time src

The only difference is the NOT in the first one. Can someone help me combine them? I tried wrapping the searches in parentheses and combining them, but that didn't work. Example:

(index=abc dest="xyz.com" uri_path="access.html" http_method=POST NOT [| inputlookup filter_ips | fields src] | eval test=a1) OR (index=abc dest="xyz.com" uri_path="access.html" http_method=POST [| inputlookup filter_ips | fields src] | eval test=a2) | stats count by _time src test

But it gives an error about a malformed eval expression.
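One way to sketch this as a single search, assuming filter_ips has a single src column: use the lookup command instead of a subsearch, and derive the test flag from whether each src matched:

```
index=abc dest="xyz.com" uri_path="access.html" http_method=POST
| lookup filter_ips src OUTPUT src AS src_in_lookup
| eval test=if(isnull(src_in_lookup), "a1", "a2")
| stats count by _time src test
```

Here src_in_lookup is just a marker field; it is non-null only for rows whose src appears in the lookup, so "a1" plays the role of the NOT branch and "a2" the included branch.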
Hi, I have an automatic process that writes some information to a CSV file daily [1]. Then I have a dashboard that picks up the data and uses xyseries so that I can see the evolution by day [2]. Now I want to calculate the difference between each day, but the problem is that I don't have fixed field names to use in an eval. How can I solve this? [3] Please see the examples below.

[1] Information saved every day (| is the column separator): [Item_Name] | [Date_1] | [Total]

2021-01-18
Item 1 | 2021-01-18 | 32
Item 2 | 2021-01-18 | 50
Item 3 | 2021-01-18 | 10
Item 4 | 2021-01-18 | 15

2021-01-19
Item 1 | 2021-01-19 | 29
Item 2 | 2021-01-19 | 37
Item 3 | 2021-01-19 | 8
Item 4 | 2021-01-19 | 10

2021-01-20
Item 1 | 2021-01-20 | 31
Item 2 | 2021-01-20 | 25
Item 3 | 2021-01-20 | 5
Item 4 | 2021-01-20 | 13

[2] Applying | inputlookup blabla | xyseries Item_Name Date_1 Total, I get the following:

         2021-01-18  2021-01-19  2021-01-20
Item 1   32          29          31
Item 2   50          37          25
Item 3   10          8           5
Item 4   15          10          13

[3] What I would like to have:

         2021-01-18  2021-01-19  2021-01-20  Dif_18_19    Dif_19_20
Item 1   32          29          31          -3 (29-32)   2 (31-29)
Item 2   50          37          25          -13 (37-50)  -12 (25-37)
Item 3   10          8           5           -2 (8-10)    -3 (5-8)
Item 4   15          10          13          -5 (10-15)   3 (13-10)

Thanks in advance. Best regards,
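One way to sketch the day-over-day difference without hard-coding the date column names: compute it per item before pivoting, using streamstats (field names taken from the lookup above):

```
| inputlookup blabla
| sort 0 Item_Name Date_1
| streamstats current=f window=1 last(Total) AS prev_total BY Item_Name
| eval Diff = Total - prev_total
| xyseries Item_Name Date_1 Diff
```

This pivots the differences by date. If both the totals and the differences are needed side by side in one table, the same base search can feed two pivots joined on Item_Name, or a wildcard foreach can be applied after the pivot.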
Hello. I've recently installed Security Essentials on my Splunk instance, which is receiving Palo Alto, Cisco switch, and Active Directory logs. I can see all the data just fine in each respective app, and each of their data models is accelerated. I've installed the CIM, accelerated the relevant data models, and set up Security Essentials, but when I go to the CIM Compliance Check I get the following error:

Error in 'sseidenrichment' command: External search command exited unexpectedly with non-zero error code 1

Thoughts?
Hi, please help with this, @niketn. I want the two rows below in a single panel:

search by employeeid (hyperlink)
search by app (hyperlink)

Once a hyperlink is clicked, it should open a new search with the corresponding query:

index=x | search employeeid=123
index=x | search app=abc

Thanks in advance.
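A Simple XML sketch of one way to do this with a table panel and a conditional drilldown; the two rows, the row text, and the URL encodings below are illustrative, and the two target searches are the ones above:

```xml
<panel>
  <table>
    <search>
      <query>| makeresults
| eval link="search by employeeid"
| append [| makeresults | eval link="search by app"]
| table link</query>
    </search>
    <drilldown>
      <condition match="'row.link'==&quot;search by employeeid&quot;">
        <link target="_blank">search?q=search%20index%3Dx%20employeeid%3D123</link>
      </condition>
      <condition>
        <link target="_blank">search?q=search%20index%3Dx%20app%3Dabc</link>
      </condition>
    </drilldown>
  </table>
</panel>
```

The first condition fires when the clicked row's link value matches; the bare second condition is the catch-all for the other row, and target="_blank" opens the search in a new tab.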
Hi Ninjas, I have created an alert scheduled via cron to run every 15 minutes from 6 PM to 6 AM, Sunday to Thursday.

Cron expression: */15 18-6 * * 7-4

But I am getting an error on the cron expression. I have written a similar one (*/15 1-13 * * 7-4) which works as expected. Can someone please help me with the expression? Thank you.
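Standard cron ranges cannot wrap past midnight, which is likely why 18-6 is rejected while 1-13 is accepted; the day-of-week field also normally runs 0-6 (Sunday = 0), so 7-4 may be out of range too. A sketch of the intent, every 15 minutes from 6 PM through 5:45 AM on Sunday-Thursday:

```
*/15 0-5,18-23 * * 0-4
```

One caveat: the 0-5 window is matched on the same weekdays as the 18-23 window, so the Friday-morning tail of Thursday night is excluded and early Sunday is included. If the overnight window must track the previous evening's weekday exactly, two separate schedules (18-23 on days 0-4, and 0-5 on days 1-5) are the usual workaround.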
Hello, can DB Connect connect to DB2 running on Windows, or does it only work with DB2 running on Linux?
Hello, I have created an alert in Splunk and an incoming-webhook connector in Teams to receive alerts. I provided the webhook URL in the alert setup, and added email as well. I am getting emails when the alert is triggered, but the alerts are not received in Teams. Do we need to make any other changes to allow this? Any rules to be updated?
Are the daily limits enforced right away or over an averaging period? For example, if I am testing by importing 5 GB, but it will be a one-time import rather than a stream, how is that handled? I will not exceed the average daily limit over 30 days. Is this handled as one violation, or as several violations until 10 days later when the average is back within the limit?
I just got Splunk ingesting some data and now I'm getting this error message. I found several posts related to it, but they are six years old and I think out of date. One of them said to go to the server.conf file and edit this stanza, but I don't see that stanza in the file. Is the stanza in another location?

[diskUsage]
minFreeSpace = <num>
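For reference, the stanza normally lives in $SPLUNK_HOME/etc/system/default/server.conf, and the supported way to change it is to add an override under the local directory rather than editing default. A sketch (5000 MB is the shipped default; lower the threshold only if you accept the disk filling up further):

```
# $SPLUNK_HOME/etc/system/local/server.conf
[diskUsage]
minFreeSpace = 3000
```

A splunkd restart is needed for the change to take effect. Freeing disk space or tightening index retention is usually a better fix than lowering the threshold.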
I am using a table of results:

a   | b   | c    | search            | d  | e
xx  | yy  | zzz  | index=firstindex  | bb | ppp
yyy | qqq | eeee | index=secondindex | rr | sss
ttt | zxc | asd  | index=thirdindex  | uy | mmm

Based on each result, I would like to use a foreach command to loop through each row and run a subsearch based on the VALUE in the "search" field. From a coding perspective it would be something like:

for each row:
    if field == search:
        # use value in search
        [search value | return index to main search]

so that it evaluates to something like this for each row:

[search index=firstindex | return index]

My desired output is:

index
==========
firstindex
secondindex
thirdindex

Is this possible? I have tried using

foreach * [eval if <<FIELD>>=="search"[search <<FIELD>>] ","[search <<FIELD>>]]

but this does not seem to work. I am aware of the map command; however, as my field values have the text index= before the actual index name, I cannot simply do

| map search="search index=$search$"

because I believe that would resolve to map search="search index=index=firstindex", which is an error. Is there any way I can do something like

| map search="search $search$ | stats values(index)"

and have it return:

index
==========
firstindex
secondindex
thirdindex

I've looked around the Splunk community forums, but the answers point at map instead of foreach. I am really lost on how to get around this issue and achieve my desired output; it would be great if someone with more Splunk experience could assist me.
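If the end goal is only the list of index names, a rex appended to the base search that produces the table may be enough, with no foreach or map at all (field names taken from the table above):

```
| rex field=search "index=(?<idx>\S+)"
| table idx
```

If each search really must be executed, stripping the index= prefix into idx first also unblocks map: `| map search="search index=$idx$ | stats values(index) AS index" maxsearches=10` avoids the index=index= doubling described above.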
Context: the existing Splunk installation I'm working with is not very robust when handling search requests, due to the sheer volume of searchable events. The question: is there a way to make Splunk disregard its default sorting behavior and return the first N found matches as quickly as possible? The goal is to use this in conjunction with the head clause so a search returns its first matches as fast as possible; it is totally OK if events are presented without prior by-time sorting. The expectation is that this approach should make a search near-instantaneous, provided the filtering expression is broad enough that the first N matches can be found very quickly.
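For what it's worth, `head` already finalizes a search once N results have streamed back, so a sketch like the following is usually the closest SPL gets to "first N matches as fast as possible" (index and filter are placeholders); as far as I know there is no supported knob to disable the reverse-time event ordering itself:

```
index=your_index your_filter
| head 100
```

Running the search in Fast Mode and narrowing the time range further reduces the work done per event before the head limit is reached.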
Currently utilizing Splunk Cloud, and this is most likely a new-user question. I have a .csv file that is continuously updated at random intervals. Uploading a new version of the .csv to Splunk every time is not viable. What is the best practice to keep a file like that up to date in Splunk? I have been unable to find an app or tool to connect the two platforms, so exporting the .csv from one and uploading it to Splunk is the current process. Thank you.
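One common pattern, assuming the CSV lives on a host where a Universal Forwarder can be installed: monitor the file and let the forwarder ship changes to Splunk Cloud, instead of re-uploading (path, index, and sourcetype below are placeholders):

```
# inputs.conf on the Universal Forwarder
[monitor:///data/exports/report.csv]
index = main
sourcetype = csv
disabled = false
```

Note that a monitored file is expected to grow by appends; if the file is rewritten in place each time, the forwarder may re-ingest or skip it depending on how the contents change. If the CSV is really meant to be a lookup table rather than events, the Lookup File Editor app on Splunk Cloud is the usual alternative.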
Hello all, I have an alert with an action that creates a ServiceNow incident when a file has not been received by a certain date. The first time the alert ran and the action was triggered, a new incident was created. Subsequently, however, the original incident has been reopened. This is not what I want; I need a new incident each time the action is triggered. I have tried setting 'state' to 1, which the documentation here says means New: https://docs.splunk.com/Documentation/AddOns/released/ServiceNow/Commandsandscripts

state | No | Number | 4 | The state of the incident. For example:
1 - New
2 - Active
3 - Awaiting Problem
4 - Awaiting User Info
5 - Awaiting Evidence
6 - Resolved
7 - Closed

I have also left the Correlation ID blank so that a new (unique) ID is created:

Correlation ID: A unique ID to support third-party application integration. It should only consist of alphanumeric characters, underscore (_), and hyphen (-) in its value. Leave blank for the Splunk Add-on for ServiceNow to generate a unique ID for you.

Thanks in advance for the support.
We have set up a distributed sandbox system on release 8.1.2. We have configured scripted authentication on our search head based on the PAM scripts located in $SPLUNK_HOME/share/splunk/authScriptSamples. We are using userMapping.py, pamScripted.py, and a compiled version of pamauth.c in our setup. We have made some minor modifications to pamScripted.py to make sure that the script only returns users that are in userMapping.py instead of everyone in /etc/passwd; otherwise we have just followed the configuration guidelines in the Securing the Splunk Platform manual. When a user tries to log in, we get errors in splunkd.log like this:

TypeError: memoryview: a bytes-like object is required, not 'str'

The full sequence of error messages is:

ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/pamscripts/pamScripted.py userLogin': usage: Traceback (most recent call last):
ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/pamscripts/pamScripted.py userLogin': File "/opt/splunk/pamscripts/pamScripted.py", line 177, in <module>
ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/pamscripts/pamScripted.py userLogin': userLogin( dictIn )
ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/pamscripts/pamScripted.py userLogin': File "/opt/splunk/pamscripts/pamScripted.py", line 60, in userLogin
ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/pamscripts/pamScripted.py userLogin': output = proc.communicate( infoIn['password'] )
ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/pamscripts/pamScripted.py userLogin': File "/opt/splunk/lib/python3.7/subprocess.py", line 964, in communicate
ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/pamscripts/pamScripted.py userLogin': stdout, stderr = self._communicate(input, endtime, timeout)
ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/pamscripts/pamScripted.py userLogin': File "/opt/splunk/lib/python3.7/subprocess.py", line 1695, in _communicate
ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/pamscripts/pamScripted.py userLogin': input_view = memoryview(self._input)
ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/pamscripts/pamScripted.py userLogin': TypeError: memoryview: a bytes-like object is required, not 'str'

It seems to me that the pamScripted.py script has not been updated to work under Python 3.7. Did anyone manage to make it work in earlier versions of release 8?
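The traceback shows proc.communicate(infoIn['password']) being handed a str where the Python 3 subprocess pipe expects bytes, so the sample script does look un-ported. A minimal sketch of the two usual fixes; the echo child below is only a stand-in for the compiled pamauth helper:

```python
import subprocess
import sys

# Stand-in child process that echoes stdin back to stdout; in pamScripted.py
# this would be the compiled pamauth binary.
child = [sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read())"]

# Fix 1: encode the password to bytes before writing it to the binary pipe.
proc = subprocess.Popen(child, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, _ = proc.communicate("hunter2".encode("utf-8"))

# Fix 2: open the pipes in text mode so communicate() accepts str directly.
proc2 = subprocess.Popen(child, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                         universal_newlines=True)
out2, _ = proc2.communicate("hunter2")
```

Either change on line 60 of pamScripted.py should clear the TypeError; which one is appropriate depends on whether pamauth expects a particular encoding.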
Hi Splunk community, somehow my left join is not working when I select all EntityIds, although when I select a single EntityId it works. Any hints on why the first case is not working and how I can fix it? Thanks, Zarrukh
I am following this blog http://www.georgestarcher.com/splunk-stored-encrypted-credentials/ to delete and re-create an encrypted credential (storing an API key). But the script fails to delete the existing credential, and the following error is thrown: HTTP 403 Forbidden: cannot delete the key. Please help me solve this issue.
I am trying to store credentials in encrypted form as suggested in http://www.georgestarcher.com/splunk-stored-encrypted-credentials/ and https://wiki.splunk.com/Community:40GUIDevelopment . I am authenticating with a session key to list and store storage/passwords entries, and I get the session key through CherryPy (cherrypy.session.get('sessionKey')). The only issue I am facing right now is that when I try to update an existing credential (e.g. an API key), I get an error: the script fails to delete the existing credential, and if it is not deleted, we cannot create a new credential with the same name (updating the credential). Any help would be appreciated.
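For what it's worth, an HTTP 403 on the delete usually means the session's user lacks the capability (typically admin_all_objects) to remove entries from the storage/passwords endpoint, rather than a problem with the script itself. A hypothetical sketch of the delete call for checking permissions by hand (realm myrealm, user apiuser, and app myapp are made up; note the trailing colon in the entry name, URL-encoded as %3A):

```
curl -k -u admin:changeme --request DELETE \
  "https://localhost:8089/servicesNS/nobody/myapp/storage/passwords/myrealm%3Aapiuser%3A"
```

If the call succeeds as an admin but fails under the script's session key, the fix is to grant the role the missing capability rather than to change the script.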
Hi, since some version (now using 8.1.2) I have had trouble using the 'sendemail' command in a search (dashboard/form) for users that have the standard user roles. This issue has been troubling me for almost 1.5 years. Of course I am aware of the need to select 'list_settings', but it has never produced results. Selecting 'admin_all_objects' for the standard user role works, but granting 'admin_all_objects' to standard users is nothing but a security breach. That cannot be the solution, so what am I missing here? And why does Splunk not create a dedicated, straightforward capability for the 'sendemail' command? Ashley Pietersen
After upgrading to Splunk v8.0.5, I can no longer use the lookup updater that was previously available with Sideview Admin Tools. I installed the Canary application as the message instructed, but I don't know where to find the Lookup Updater. Is there a way to enable it? How can I do this?