
All Posts

I can get it working to an extent. Not sure if this method will exactly fit your use-case, but I'll leave it here for you.

So with a lookup named "test_regex_lookup.csv":

pattern_type    regex
date            \d{2}\/\d{2}\/\d{4}
SSN             \d{3}\-\d{2}\-\d{4}

We are able to pull these regex patterns into a parent search via eval and then use those patterns in another eval to extract data. Example:

| makeresults
| eval data="very personal info on John Doe: Birthday: 04/12/1973 and SSN: 123-45-6789"
``` pull in regex patterns from lookup ```
| eval ssn_regex=[ | inputlookup test_regex_lookup.csv where pattern_type="SSN" | fields + regex | eval regex="\"".'regex'."\"" | return $regex ],
    bday_regex=[ | inputlookup test_regex_lookup.csv where pattern_type="date" | fields + regex | eval regex="\"".'regex'."\"" | return $regex ]
``` use regex pattern fields to extract matches from another field "data" ```
| eval ssn=replace(data, ".*(".'ssn_regex'.").*", "\1"),
    bday=replace(data, ".*(".'bday_regex'.").*", "\1")

The resulting dataset ends up with ssn and bday as separate extracted fields.

I'm sure there are other methods that can work, or we can build on this method further. I am curious about different ways of doing this as well, so I will leave updates if I figure out any other methods.

Update: I was able to shorten the SPL into a single eval by using the nifty lookup() function.

| makeresults
| eval data="very personal info on John Doe: Birthday: 04/12/1973 and SSN: 123-45-6789"
``` get regex pattern from lookup and use it against raw data in another field to extract data into a net-new field ```
| eval ssn=replace(data, ".*(".spath(lookup("test_regex_lookup.csv", json_object("pattern_type", "SSN"), json_array("regex")), "regex").").*", "\1"),
    bday=replace(data, ".*(".spath(lookup("test_regex_lookup.csv", json_object("pattern_type", "date"), json_array("regex")), "regex").").*", "\1")
Is it possible to store regex patterns in a lookup table so that they can be used in a search? For example, let's say I have the following regexes: "(?<regex1>hello)" and "(?<regex2>world)" (my actual regexes are not simple word matches). I want to write another query that basically runs a bunch of regexes like:

| rex field=data "regex1"
| rex field=data "regex2"
etc.

Is it possible to use a subsearch to extract the regexes and then use them as commands in the main query? I was trying something like:

| makeresults 1
| eval data="Hello world"
    [| inputlookup regex.csv
     | streamstats count
     | strcat "| rex field=data \"" regex "\"" as regexstring
     | table regexstring
     | mvcombine regexstring]

so that the subsearch outputs the following:

| rex field=data "(?<regex1>hello)"
| rex field=data "(?<regex2>world)"
>>> I am running this query against a large data set. Does using a foreach loop and JSON functions have any limitation in that case? Like results getting truncated and so on?

1) May we know how large the data set is, so that we can suggest something better suited?

2) foreach does not have any such limitation, whereas spath has a 5,000-character limitation. However, the docs describe how to override it:

"By default, the spath command extracts all the fields from the first 5,000 characters in the input field. If your events are longer than 5,000 characters and you want to extract all of the fields, you can override the extraction character limit for all searches that use the spath command. To change this character limit for all spath searches, change the extraction_cutoff setting in the limits.conf file to a larger value."

https://docs.splunk.com/Documentation/Splunk/9.1.2/SearchReference/Spath
https://docs.splunk.com/Documentation/Splunk/9.1.2/SearchReference/Foreach

PS - Upvotes / likes / karma points are appreciated, thanks.
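For reference, the override mentioned above goes under the [spath] stanza in limits.conf. A minimal sketch, assuming you can edit limits.conf on the search head; the value 50000 is just an illustrative number, not a recommendation:

# limits.conf (e.g. $SPLUNK_HOME/etc/system/local/limits.conf) -- illustrative sketch
# extraction_cutoff defaults to 5000 characters; raise it only as high as you actually need
[spath]
extraction_cutoff = 50000

A restart (or debug/refresh) is typically needed for the change to take effect, and it applies to all searches that use spath.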
| ldapsearch domain="default" search="(&(samAccountType=000000000) (|(sAMAccountName=*)))" attrs="sAMAccountName, distinguishedName, userAccountControl, whenCreated, personalTitle, displayName, givenName, sn, mail, telephoneNumber, mobile, manager, department, co, l, st, accountExpires, memberOf"
| rex field=memberOf "CN=(?<memberOf_parsed>[^,]+)"
| eval memberOf=lower(replace(mvjoin(memberOf_parsed, "|"), " ", "_"))
| rex max_match=5 field=distinguishedName "OU=(?<dn_parsed>[^,]+)"
| eval category=lower(replace(mvjoin(dn_parsed, "|"), " ", "_"))
| eval priority=case(match(category, "domain_admin|disabled|hold|executive") OR match(memberOf, "domain_admins|enterprise_admins|schema_admins|administrators"), "critical", match(category, "contractor|service_account|external"), "high", match(category, "employees|training|user_accounts|users|administration"), "medium", 1==1, "unknown")
| eval watchlist=case(match(category,"disabled|hold"), "true", 1==1, "false")
| eval startDate=strftime(strptime(whenCreated,"%Y%m%d%H%M"), "%m/%d/%Y %H:%M")
| eval endDate=strftime(strptime(accountExpires,"%Y-%m-%dT%H:%M:%S%Z"), "%m/%d/%Y %H:%M")
| eval work_city=mvjoin(mvappend(l, st), ", ")
| rename sAMAccountName as identity, personalTitle as prefix, displayName as nick, givenName as first, sn as last, mail as email, telephoneNumber as phone, mobile as phone2, manager AS managedBy, department as bunit, co AS work_country
| fillnull value="unknown" category, priority, bunit
| table identity,prefix,nick,first,last,suffix,email,phone,phone2,managedBy,priority,bunit,category,watchlist,startDate,endDate,work_city,work_country,work_lat,work_long
| outputcsv xyz.csv

This is the search that is being used to generate a CSV file, and yes, it's the same add-on as you mentioned.

I believe you're right that
> they're writing to a directory (on the same host as the HF)
and ingesting it by using an inputs.conf input, because in Splunk Cloud we cannot monitor directories directly from the cloud instance. Correct me if I'm wrong? Thanks.
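If that is the case, the ingestion side on the HF would typically be a plain file monitor input along these lines. This is only a hypothetical sketch: the path (outputcsv normally writes under $SPLUNK_HOME/var/run/splunk/csv/ by default), sourcetype, and index are placeholders to be checked against your actual environment, with the index name taken from the earlier question in this thread:

# inputs.conf on the heavy forwarder -- hypothetical sketch, adjust path/index/sourcetype to your setup
[monitor:///opt/splunk/var/run/splunk/csv/xyz.csv]
disabled = 0
index = asset_identity
sourcetype = csv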
Hello Everyone,

We have a Splunk server installed and working. However, a month ago our license expired, and we just renewed it. Unfortunately, it didn't go without issues. We renewed our license, but now the system isn't working, and we are getting the following error message:

"Error in 'litsearch' command: Your Splunk license expired, or you have exceeded your license limit too many times."

We found the support article "Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times" on the Splunk site, but we are unable to access it as it requires a Salesforce login.

Can anyone help us figure out how to fix this issue?

Thank you,
Richard
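One quick sanity check is whether the renewed license was actually installed and is active on the license master. Assuming you can run the rest command, something like this sketch should list the installed licenses and their status (field names may vary slightly by version):

| rest /services/licenser/licenses splunk_server=local
``` expiration_time is an epoch timestamp; convert it for readability ```
| eval expires=strftime(expiration_time, "%Y-%m-%d")
| table label group_id status expires quota

If the new license shows as VALID but searches still fail, clearing the recorded violations/warnings (or waiting out the rolling window) is usually the next thing to look at.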
Any non-internal index could be a summary index, to be honest. But like @dtburrows3 said, you'll have to take a look at savedsearches.conf to see which searches use the collect command to write to an index. This isn't guaranteed to identify summary indexes, but it will help you narrow down which indexes to look into. In our environment, our summary indexes are identified with the "summary_" prefix as a best practice.
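If you'd rather not grep savedsearches.conf by hand, a REST-based search along these lines can surface saved searches whose SPL pipes to collect. This is a sketch, not exhaustive: it assumes you can read other users' saved searches, and it won't catch ad-hoc collect usage:

| rest /servicesNS/-/-/saved/searches splunk_server=local
``` keep only saved searches whose SPL contains a collect command ```
| search search="*| collect *"
| table title eai:acl.app search

Note that searches using the built-in summary-indexing action (action.summary_index) won't contain a literal collect in their SPL, so checking that setting as well is worthwhile.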
You can try this to get the report in that format.

Edit: Noticed that the chart method could mess up the order of dates from left to right, so I think sorting first and then doing a transpose should fix it.

source="/apps/WebMethods/IntegrationServer/instances/default/logs/DFO.log"
| timechart span=1d limit=30 count as count by DFOINTERFACE
| sort 0 +_time
| eval timestamp=strftime(_time, "%m/%d/%Y")
| fields + timestamp, *
| fields - _*
| transpose 30 header_field=timestamp
| rename column as "DFOINTERFACE \ Date"

Example from my local instance.
Can you clarify what technical add-on you're using? Also, couldn't you ask your admin to clarify your original question? If you're using the Active Directory add-on, then you can write a search using the ldapsearch command and write the results to an index with the collect command. Otherwise, writing out a CSV file and then having a file monitor input ingest the CSV is the long way to do it.
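As a rough sketch of that approach (the LDAP filter, attribute list, and index name below are placeholders based on what's shared elsewhere in this thread, not a tested configuration), a scheduled search could look something like:

| ldapsearch domain="default" search="(objectClass=user)" attrs="sAMAccountName, displayName, mail, department"
``` write the results straight to an index instead of a CSV ```
| collect index=asset_identity

collect defaults to the stash sourcetype, which normally does not count against license usage, and it avoids the extra hop of writing and re-monitoring a CSV file.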
Hi @dtburrows3, it's giving a different result. I just want it in the reverse direction. It's currently giving me output like the first screenshot, but I want it like the second screenshot.
Hi All,

I am using the sendemail command to send a CSV file to different recipients based on the search.

| eval subject="This is test subject", email_body="This is test email body"
| map search="| inputcsv test.csv | where owner=\"$email$\" | sendemail sendcsv=true to=\"$email$\" subject=\"$subject$\" message=\"$email_body$\""

I want the email body to be "This is test email body". Instead I am getting "Search Results attached." I understand the message depends on the arguments passed; since I am passing sendcsv=true, I am getting this. I am using sendcsv because I am sending the results as an attachment (CSV file).

Please let me know how I can pass a custom message to the email body.

Regards,
PNV
Did not know about the valid key entries. Thanks for sharing! Came across this documentation after reading your comment: https://docs.splunk.com/Documentation/Splunk/9.1.2/Data/MonitorWindowseventlogdata Oof, and it's right there in the inputs.conf docs too.
"ProcessName" is not a valid key for a blacklist setting.  Valid keys are "Category, CategoryString, ComputerName, EventCode, EventType, Keywords, LogName, Message, OpCode, RecordNumber, Sid, SidType... See more...
"ProcessName" is not a valid key for a blacklist setting.  Valid keys are "Category, CategoryString, ComputerName, EventCode, EventType, Keywords, LogName, Message, OpCode, RecordNumber, Sid, SidType, SourceName, TaskCategory, Type, and User". Also, the RHS must be a valid regular expression.  A valid regex cannot begin with "*".  If you're trying to specify a wildcard at the beginning and end of the match then there's no need - that's implied with most regexes.
Give this a try:

blacklist3 = EventCode="4673" Process_Name=".*\\DesktopExtension\.exe.*"

From what I'm reading in the Splunk docs, it seems the value needs to be a valid regex to work, and this regex seems to match properly. The original pattern you posted doesn't seem to be valid according to regex101.

Also noticed that the key you posted, "ProcessName", is different from the field I see extracted from Windows data on my local machine, which comes in as "Process_Name" - but maybe that is how it is coming over in your environment. If that is the case, then maybe this could work:

blacklist3 = EventCode="4673" ProcessName=".*\\DesktopExtension\.exe.*"
Hi folks,

Happy new year to you all :-) In my org the Splunk deployment is as follows: heavy forwarders (HF1, HF2) > collecting data from directories and HTTP > sent to Splunk Cloud (2 search heads).

Case: We have the Active Directory add-on on HF1, which establishes a connection to AD, writes a CSV file under var/* on the host, and the data is then indexed to the cloud. The admin said we have an input which writes data to index=asset_identity. I am not sure what the admin was referring to - is it a conf file on the HF?
Hello all,

I am trying to blacklist this app that is generating a ton of Windows event logs, until I find what app it is and uninstall it. This is for HP's DesktopExtension.exe. The weird thing is that it is only running on about 30 devices.

Here is the current section in inputs.conf:

[WinEventLog://Security]
disabled = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist3 = EventCode=4673 ProcessName="*\\DesktopExtension.exe*"
renderXml=false
index=oswinsec

However, even after restarting the Splunk forwarder the events still appear. I verified one of the hosts has the correct inputs.conf. I have also tried:

blacklist3 = EventCode=4673 ProcessName="C:\Program Files\WindowsApps\AD2F1837.myHP_28.52349.1300.0_x64__v10z8vjag6ke6\win32\DesktopExtension.exe"

Here is an example of the log/event:

LogName=Security
EventCode=4673
EventType=0
ComputerName=*********
SourceName=Microsoft Windows security auditing.
Type=Information
RecordNumber=10115718
Keywords=Audit Failure
TaskCategory=Sensitive Privilege Use
OpCode=Info
Message=A privileged service was called.

Subject:
  Security ID: *****************
  Account Name: ****************
  Account Domain: ***********
  Logon ID: ****************

Service:
  Server: Security
  Service Name: -

Process:
  Process ID: 0x6604
  Process Name: C:\Program Files\WindowsApps\AD2F1837.myHP_28.52349.1300.0_x64__v10z8vjag6ke6\win32\DesktopExtension.exe

Service Request Information:

Any tips?
Assuming that your events have proper timestamps extracted to the _time field, you should be able to do this:

source="/apps/WebMethods/IntegrationServer/instances/default/logs/DFO.log"
| timechart limit=30 span=1d count as count by DFOINTERFACE
I am getting the count for each interface, but I need it date-wise, as in the example below. Please help me modify my query.
What do you mean by "calls"? If you mean API calls, there is no limit I know of.

Data retrieval is not limited by time period. Query results are limited in the amount of disk space they can use, with each role having its own configurable limit (100MB is the default). Once the limit is reached, old jobs must be deleted to free up disk space.

Data ingestion is limited only by the power of the indexer(s). The I/O rate of the storage system is a key factor, however. HEC inputs tend to be faster, but have a limit of 1MB per transmission.

Data loss is possible in a number of ways. For example, if an indexer goes down and the sender does not retry the transmission, then data could be lost. We'll need to know more specifics about your environment to discuss other ways data could be lost.
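For reference, the per-role search disk quota mentioned above is set in authorize.conf. A minimal sketch, assuming a custom role named role_analyst (the role name and value are just examples, not recommendations):

# authorize.conf -- illustrative sketch; srchDiskQuota is in MB and defaults to 100
[role_analyst]
srchDiskQuota = 500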
Try this instead:

index="aws_cloud" eventName IN ("value1", "value2", "value3")

I believe the format you posted searches for eventName="value1" OR any raw event containing the string "value2" OR "value3", even if "value2" or "value3" isn't the actual value of eventName for that particular event.
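If you'd rather keep the OR style, the equivalent is to repeat the field name for each value so every term stays a field match:

index="aws_cloud" (eventName="value1" OR eventName="value2" OR eventName="value3")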
A question about a search query against Splunk AWS data:

index="aws_cloud" | search eventname="value1" OR "value2" OR "value3"

The above search query returns the events for the values I searched for, but it also returns events with a value I did not search for, e.g. eventName: LookupEvents. Why am I getting this field and value that I didn't search for?