All Topics


Hi, I want to know how I can detect if someone alters data in my SQL Server databases. Also, can I do it with the DB Connect app? Thanks.
Hello everyone, I'm trying to "Add Data" and specifically to add the file "Microsoft-Windows-Windows Defender%4Operational.evtx", but Splunk always fails to parse it and displays unreadable data. My goal is to monitor my Windows Defender logs, so I tried setting the source type to "preprocess-winevt" as suggested in one article, but the result was quite strange: the data, as you can see in the image, looked neat, yet in the end it was still not being parsed. Also, I do not understand why Splunk manages to parse .evtx files such as Application, Security, and others, but cannot parse this .evtx file from the same directory and machine. What am I doing wrong? To clarify, I want to ingest the data from the file "C:\Windows\System32\winevt\Logs\Microsoft-Windows-Windows Defender%4Operational.evtx".
Hi, I created a data model and the searches were working previously but now it keeps failing and I don't know why. Is there something wrong with my search?
Hi, I'm trying to create an eval expression in my data model which is based on _time. Can you please advise on what I'm doing wrong?
Hi Team, we are facing an issue creating a 15-day free trial account. When we fill in the required details and click the Get Started button, we get the window below. Why am I getting this error? What have we missed? Can you please help me solve this? NOTE: Post title edited for clarity. Screenshot PII redacted. — Claudia Landivar, Community Manager
Hi, I'm trying to round the average of my response_time but I'm still getting undesirable results (all the decimal places). Can someone advise on the correct format?

stats avg(eval(round((response_time),2))) as avg_response_time
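A likely fix, assuming response_time is numeric: `avg(eval(round(...)))` rounds each event's value before averaging, so the mean itself can still carry many decimal places. Rounding the aggregate after stats avoids that:

```spl
| stats avg(response_time) as avg_response_time
| eval avg_response_time = round(avg_response_time, 2)
```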
The following example

| makeresults | eval FilePath="\\Temp.exe" | where match(FilePath, "(?i)\\Temp\.exe$")

creates a field FilePath with the value \Temp.exe. To match that, I am escaping the single backslash with two backslashes in the match statement, but that gives the error: Error in 'where' command: Regex: unrecognized character follows \. If I use \\s, the search does not fail with an error, presumably because \s is a valid character class expression, whereas \T is not. So, based on the description of the eval replace function (https://docs.splunk.com/Documentation/Splunk/8.1.3/SearchReference/TextFunctions#replace.28X.2CY.2CZ.29), if I double-escape the \ and use

| makeresults | eval FilePath="\\Temp.exe" | where match(FilePath, "(?i)\\\\Temp\.exe$")

then it works. I was looking to confirm that this is due to the same double-escaping requirement, applying ONLY to the \ character, and if so, whether it is a general rule that PCRE expressions inside eval statements that contain \ will always need four instances of \.
Hello, I have some event logs that show batch purchases like this:

Event 1:
<BankID>Bank A</BankID> <value>5</value> <status>pending</status>
<BankID>Bank B</BankID> <value>7</value> <status>Success</status>

Event 2:
<BankID>Bank A</BankID> <value>5</value> <status>pending</status>
<BankID>Bank B</BankID> <value>7</value> <status>Success</status>
<BankID>Bank B</BankID> <value>9</value> <status>Success</status>

I have to make two tables. The first has to show how many purchases are in each event's batch, like this:

Name | Batch number
event1 | 2
event2 | 3

The other table shows the details of each purchase; for example, for event 2:

Time | Bank | Value | Status
_time | BankA | 5 | Pending
_time | BankB | 7 | Pending
_time | BankB | 9 | Success
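A sketch of one approach, assuming Splunk extracts BankID, value, and status as parallel multivalue fields per event (the field names here follow the XML tags and are assumptions). For the first table, `| eval batch=mvcount(BankID)` gives the purchase count per event. For the second, pairing up the parallel multivalue fields needs mvzip before mvexpand:

```spl
| eval pair = mvzip(mvzip(BankID, value, "|"), status, "|")
| mvexpand pair
| eval Bank = mvindex(split(pair, "|"), 0),
       Value = mvindex(split(pair, "|"), 1),
       Status = mvindex(split(pair, "|"), 2)
| table _time Bank Value Status
```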
Currently, my Splunk search to get a list of the security cameras' MAC addresses with their respective IPs is:

index=dhcp 00:04:7d 10.101.240.* | table dest_mac, dest_ip | dedup dest_ip | dedup dest_mac

How would I get it to check for multiple MAC addresses with the same IP? This would indicate that the IP is not fixed. Thank you!
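One way to flag shared IPs, assuming dest_mac and dest_ip extract as in the search above: count distinct MACs per IP and keep only IPs seen with more than one.

```spl
index=dhcp 00:04:7d 10.101.240.*
| stats dc(dest_mac) as mac_count, values(dest_mac) as macs by dest_ip
| where mac_count > 1
```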
Hi, I would like to remove some data from my search (I only want AreaOIC). I tried Data=AreaOIC or Data!=XXXXX (*xxxx = the value I would like to exclude), but the results still include events I do not want. How should I go about doing this?

| tstats values(Arcgis.email) as email, values(Arcgis.agency) as Agency, values(Arcgis.mapservice) as Data, count(Arcgis.mapservice) as Datacount, values(Arcgis.mapfolder) as Mapfolder from datamodel=Arcgis where (host=URASVR334) groupby _time Arcgis.email
| search [tstats values(Eplanner.loginname) as email from datamodel=Eplanner | table email] NOT Mapfolder=*ONETOOL* NOT Mapfolder=*GEMMA* NOT Mapfolder=*Scenarios* NOT Mapfolder=*USDashboard* NOT Mapfolder=*EPAC* NOT Mapfolder=*CLI* NOT Mapfolder=*MP14* NOT Data=*_3414*
| eval Email=upper(email)
| append [| tstats values(Eplanner.email) as Email, values(Eplanner.agency) as Agency, values(Eplanner.layers) as Data, count(Eplanner.layers) as Datacount from datamodel=Eplanner groupby _time Eplanner.email | eval Email=upper(Email)]
| append [| tstats values(Eplanner.email) as Email, values(Eplanner.agency) as Agency, values(Eplanner.typename) as Data, count(Eplanner.typename) as Datacount from datamodel=Eplanner groupby _time Eplanner.email | eval Email=upper(Email)]
| lookup eplannerusers.csv "Login Name" as Email OUTPUT "Login Name" Date Group as Group as Group Department Designation "Full Name" as Fullname
| strcat Fullname " / " Department Name_Dept
| search Agency=PotatoEdu Email=Potatohero@potato.edu.SG Group = PotatoEdu Data = AreaOIC
| stats values(Fullname), values(Designation), values(Name_Dept), values(Group), values(Department), values(Agency), values(Mapfolder), values(Data), sum(Datacount) by Email

Sample output:

Email values(Fullname) values(Designation) values(Name_Dept) values(Group) values(Department) values(Agency) values(Mapfolder) values(Data) sum(Datacount)
Potatohero@potato.edu.SG Ken Do Programmer Ken Do IT IT PotatoEDU Boundaries DevtControl Planning AreaOIC Land_Ownership PlanningCommitment 52
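One possible cause worth noting: because Data comes from a values() aggregation it is multivalue, so `Data=AreaOIC` matches any row where at least one of the values is AreaOIC, and `Data!=XXXXX` behaves similarly. A hedged sketch of an alternative: filter the multivalue field itself and drop rows left empty.

```spl
| eval Data = mvfilter(match(Data, "^AreaOIC$"))
| where isnotnull(Data)
```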
Hi, I need some help with a regex. Currently we have the two paths below; note that the naming format is different for the log files:

\\path\\to\\my\\app\\folder\userx-test-cpuissue.log
\\path\\to\\my\\app\\folder\usery-cpuissue.log

I wrote a regex to extract the user and the issue, but it is not able to pick up userx, since that log's naming format is different, i.e. userx-test-cpuissue.log. How do I write a single regex which can extract both naming formats?

\\\\(?<source>\w+)-(?<issue>\w+)\.log$
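One sketch that covers both names, under the assumption that the user is always the first hyphen-separated token and the issue is always the last: allow any number of intermediate `token-` segments between them.

```spl
\\\\(?<source>\w+)-(?:\w+-)*(?<issue>\w+)\.log$
```

For userx-test-cpuissue.log this captures source=userx, issue=cpuissue; for usery-cpuissue.log the optional middle group matches zero times.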
Hello, I have syslog events whose _time arrives either in seconds since the epoch (1620685037) or in seconds with microseconds since the epoch (16206722176.001440). Can a props.conf handle either of these formats and set the timestamp appropriately? I realize there is going to be some performance impact on timestamps going all the way down to microseconds, but the searches are not too heavy and the data set is not tremendous. What should the props.conf look like?
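A heavily hedged sketch (the sourcetype name is hypothetical): a single TIME_FORMAT cannot express an optional fractional part, so one option is to target the fractional form and rely on Splunk's automatic epoch recognition as a fallback for plain-seconds events; whether that fallback is acceptable depends on the data mix and should be tested against sample events.

```ini
[my_syslog_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %s.%6N
MAX_TIMESTAMP_LOOKAHEAD = 30
```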
Is it supported to use a lookup table file in searches without creating a lookup definition?
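For context, a hedged example (file and field names hypothetical): the lookup command can generally reference an uploaded CSV lookup table file directly by its file name, without a lookup definition, as long as the file is visible from the app where the search runs.

```spl
| lookup mylookup.csv host OUTPUT owner
```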
I have created a data input that runs a wrapper script, which executes a Python script and gathers its output. It was working as expected during my initial tests, but it seems to have failed when going for the full desired effect. My test script output one JSON record and worked correctly, showing the events in Splunk as expected. Once that was working, I changed the command in the wrapper script to run the desired Python script instead of the testing script. On a Friday, I scheduled it through the Splunk data input dashboard for the next day, Saturday at 8 am, using the cron syntax "0 8 * * 6". When I checked the index on Monday morning, there was no new data, only what had been output by my initial testing script. My first concern is whether the script actually ran. Is there a way I can verify this, or a way to check for data input script errors in general? Another concern is the volume of data: the test script output only 1 record, while the real script should output well over 1 million records, sometimes 10x that amount. I should also add that this script makes a large number of API calls, and I expect it to take several hours to complete. Are there any limitations in Splunk, or on the server in general, that could have caused a failure due to the huge volume of records, or the time the script takes to finish? Thank you.
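One place to check, assuming a standard Splunk Enterprise install: scripted-input launches and any stderr output are logged by the ExecProcessor component in the _internal index (the script-name filter below is a placeholder for your wrapper script's file name).

```spl
index=_internal sourcetype=splunkd component=ExecProcessor
| search "my_wrapper_script"
```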
Hi, I made a bit of a mess with the Splunk Add-on Builder. I got error 500 on the "app-list" endpoint. I removed and re-installed the Add-on Builder (as suggested in the community), and now all of my previously created apps and add-ons show up under "Other apps and add-ons" instead of under the expected "Created with Add-on Builder" tab. I cannot re-open those apps; is there any way of "re-binding" them?
Hi everyone, I've tried two different apps to do this, and I'm stuck. Goal: send a private message to a Slack user via an inline command, so that I can pipe `| map` to the inline command to populate the recipient/message. Neither app does what I need. https://splunkbase.splunk.com/app/2878/#/details : this one requires an 'incoming webhook', which is statically set to a channel. Overriding it with | sendalert slack param.channel="@myOverrideUserToSendTo" param.message="test" doesn't seem to work. https://splunkbase.splunk.com/app/3900/ : this one would work; however, you cannot trigger it with an inline command, which my use case requires. If there is a way to trigger an inline `| sendalert` with this app, I am unaware of it.
Has any Splunk guru ever written a Splunk maintenance plan? What would you include in it? Would you please share your insights?
When debugging a dashboard, it is sometimes helpful to see the search that was run, with all the token values substituted. To do this, I sometimes add inline HTML showing the query strings, so that I don't have to keep opening a new window to view what broke the search. This worked fine until I started debugging post-process searches. Here's an example:

<input type="dropdown" token="filterZ"> .... </input>
<search id="base">
  <query>index=abc | stats count by fieldA, fieldB</query>
</search>
<search base="base">
  <query>| search fieldA="$filterZ$"</query>
  <progress>
    <eval token="resCount">$job.resultCount$</eval>
    <eval token="strSearch">$job.search$</eval>
  </progress>
</search>
<row>
  <panel>
    <html>
      <div> Results: $resCount$ <br/> Search: $strSearch$ </div>
    </html>
  </panel>
</row>

job.resultCount returns the correct number for the post-process search: if the base search returns 1000 rows and the post-process search returns 50, the $resCount$ token shows 50. However, $strSearch$ shows only the query for the base search. Is there another job property I could use that would show the entire query, including the post-process portion? I.e.:

index=abc | stats count by fieldA, fieldB | search fieldA="xyz"
I've done a fair amount of searching over the forums and am still having issues comparing multivalue fields. I'm attempting to compare src_ip for events against the multivalue field user_known_ip. Below are the results I expect:

src_user_ip | src_ip | KnownIP
192.168.1.1 192.168.1.2 | 192.168.1.1 | Yes
192.168.1.3 | 172.16.1.3 | No
192.168.1.4 192.168.1.5 192.168.1.6 | 172.16.1.4 | No

My current logic pulls in the necessary events and does a lookup for user_known_ip:

index=myindex action=user_login
| lookup known_user.csv user AS src_user OUTPUT user_ip as src_user_ip
| makemv delim=" " src_user_ip
| mvexpand src_user_ip
| eval KnownIP = if(match(src_ip, src_user_ip),"Yes", "No")
| search KnownIP="No"
| stats values(src_user_ip) values(src_ip) values(KnownIP) by sAMAccountName

Despite this logic, the results still contain src_ip values that match values in the multivalue field src_user_ip:

src_user_ip | src_ip | KnownIP
192.168.1.1 192.168.1.2 | 192.168.1.1 172.16.1.1 172.16.1.2 | No
192.168.1.3 | 172.16.1.3 | No
192.168.1.4 192.168.1.5 192.168.1.6 | 172.16.1.4 172.16.1.5 192.168.1.6 | No

src_user_ip is multivalue and will have an indeterminate number of values.
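One possible explanation: after mvexpand, every row whose particular src_user_ip value does not match gets KnownIP="No" and survives the filter, and the later stats values() call then regroups all src_ip values per account, including the matching ones. A hedged sketch of an alternative that avoids the expand/regroup round-trip (mvmap requires Splunk 8.0+; field names as in the question, and note the unescaped dots in the IP regex are a simplification):

```spl
| eval matched = mvmap(src_ip, if(isnotnull(mvfind(src_user_ip, "^" . src_ip . "$")), src_ip, null()))
| eval KnownIP = if(isnotnull(matched), "Yes", "No")
| where KnownIP="No"
```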
Hello! Has anyone ever successfully ingested Red Hat Satellite logs using Splunk? If not, are there any plans on making a TA for this use-case in the near future?