All Posts


I have fields aa, bb, cc, dd, and hostname, and sometimes some field values may be null in the payload. What I want to do:

if (aa, bb are not null) then lookup abc.csv name OUTPUT name hostname ip
if (cc, dd are not null) then lookup abc.csv name OUTPUT name hostname ip
if hostname=echo then lookup abc.csv name OUTPUT name hostname ip

Here is the catch: if the 1st condition is executed, the 2nd and 3rd should be ignored; if the 2nd condition is executed, the 3rd should be ignored, and so on. I have to go up to 10 such conditions.
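On Splunk 9.0+ one possible approach is the lookup() eval function inside a single case(), since case() evaluates its conditions in order and returns at the first match — exactly the "ignore the remaining conditions" behavior described. A rough sketch, assuming abc.csv is a CSV lookup with fields name, hostname, and ip (field names taken from the post; adjust to your data):

```
| eval result = case(
    isnotnull(aa) AND isnotnull(bb), lookup("abc.csv", json_object("name", name), json_array("hostname", "ip")),
    isnotnull(cc) AND isnotnull(dd), lookup("abc.csv", json_object("name", name), json_array("hostname", "ip")),
    hostname=="echo",                lookup("abc.csv", json_object("name", name), json_array("hostname", "ip")))
| spath input=result
```

lookup() returns a JSON object, so spath is used afterwards to unpack the returned hostname/ip values into fields; the case() can be extended with further condition/lookup pairs up to the 10 conditions mentioned.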
Hello, is this still working with version 4.0.2 of the app? I made the change, but unfortunately nothing happens. How do you trigger the deletion — by restarting Splunk? Unfortunately, I can no longer make any backups because I get an error saying that I have reached the size limit for backups. Thanks for your help.
Good point - not easy to use in a case statement though
Hi, I know that as part of SPL-212687 this issue was fixed in 8.2.7 and 9.0+; however, we have had some hosts drop their Defender logs after receiving a Windows Defender update. These UFs are on version 9.0.2 but have still reported this issue. Is there any known problem that would cause this?
Have you tried opening the searches from the dashboard in a search window? Try building up the search line by line until you find where the data is no longer available. If it is not available from the start, try checking that you have the correct permissions to see the data.
Yes, you can, using the lookup eval function: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/SearchReference/ConditionalFunctions#lookup.28.26lt.3Blookup_table.26gt.3B.2C.26lt.3Bjson_object.26gt.3B.2C.26lt.3Bjson_array.26gt.3B.29 Note that the lookup has to come from a CSV — you cannot use KV Store lookups.
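For reference, a minimal sketch of the lookup() eval function, loosely following the example in the linked docs (dnslookup is the example lookup that ships with Splunk; the function returns a JSON object, which spath can then unpack):

```
| eval result = lookup("dnslookup", json_object("host", host), json_array("ip"))
| spath input=result
```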
The simple answer is no - what is your use case? What are you trying to achieve? There may be another way.
I want to call a lookup within a case statement. If possible, please share a sample query.
Cool - you obviously have more (unshared) knowledge about your events, which I could not easily have guessed at!
Thanks @ITWhisperer .  [^\"] worked for me.
Use a character class - it looks like this is hexadecimal with some hyphens thrown in so try [a-f0-9-]
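For example, a rex extraction along those lines — a sketch, with the field name request_id as a placeholder:

```
| rex field=_raw "(?<request_id>[a-f0-9-]{36})"
```

If the value is always a standard UUID, a stricter pattern such as [a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12} avoids false matches on other hex strings.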
@damode I have the same issue on a Windows machine after changing Splunk from a local account to a domain account. Did you find a solution?
Hi, I have the same problem, did you find a solution? My exe runs a scheduled task with a parameter and I don't have it as a service.
Thank you @Richfez and @tscroggins for your solutions! For my use case, the one given by @tscroggins suits best. If I understand correctly, to display the data this way it is a must that the data is ordered so that there is one record per workday, with each event timestamp in its own column. In your data example, each timestamp is in a new column, so if we had 100 events in 7 days, that would mean 100 different columns. Is it doable to manipulate the data so that for each day the timestamps are inserted starting at event01? Below is an example of what the data looks like, followed by how I would like it to be. Any suggestions on how to achieve this? For others reading along: I use Dashboard Studio and was able to replicate the visualization by adding this to the visualization code: "overlayFields": "event01, event02, event03, event04, event05, event06, event07, event08, event09, event10, event11, event12"
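One possible way to number the timestamps per day and pivot them into event01, event02, ... columns is streamstats plus xyseries — a sketch assuming each event has a usable _time, with illustrative field names (day, seq, col, ts):

```
| bin _time span=1d as day
| sort 0 day _time
| streamstats count as seq by day
| eval col = printf("event%02d", seq)
| eval ts = strftime(_time, "%H:%M:%S")
| xyseries day col ts
```

streamstats restarts the counter for each day, printf zero-pads the counter to match the event01-style column names, and xyseries pivots the rows into one record per day with one column per event.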
Need help with the extraction of an alpha numeric field. E.G. : ea37c31d-f4df-48ab-b0b7-276ade5c5312
Hi, I have two searches joined using the join command. The first search needs to run with earliest=-60m, and the second search uses a summary index from which I need to fetch all the results, so the summary index search needs to run over "all time". How can this be done? I am using earliest=-60m in my first search and a time range of "All time" when scheduling the report containing these two searches, but it is not working. I have not used any time modifier in my summary index.

Search to populate my summary index:

index=testapp sourcetype=test_appresourceowners earliest=-24h latest=now
| table sys_id, dv_manager, dv_syncenabled, dv_resource, dv_recordactive
| collect addtime=false index=summaryindex source=testapp

My scheduled report search:

index=index1 earliest=-60m
| join host [| search index=summaryindex earliest="alltime"]
| table host field1 field2
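Time modifiers inside a subsearch override the scheduled report's time range, so each search's window can be pinned explicitly instead of relying on the report's time picker. Note that "alltime" is not a valid earliest value; earliest=0 means "all time". A sketch of the scheduled search with that fix:

```
index=index1 earliest=-60m
| join host
    [ search index=summaryindex earliest=0 latest=now ]
| table host field1 field2
```

One caveat: join subsearches are subject to result-count and runtime limits, so over a large summary index a stats/eventstats-based merge may be more reliable than join.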
Hi @YJ, please try this: [monitor://C:\Program Files\somepath\folderA*\*] index=someindex sourcetype=somesourcetype Ciao. Giuseppe
Referring to the inputs.conf below from one of my Windows servers: as you can see, there is some whitespace at the end of the first line, before the closing bracket. "folderA" is the folder whose contents Splunk should be ingesting but is not (there are multiple log files inside). Is it possible that because of this whitespace Splunk is not ingesting the logs? And if so, is there an explanation we can give to the client?

[monitor://C:\Program Files\somepath\folderA ]
index=someindex
sourcetype=somesourcetype

(Note the trailing space before the closing bracket on the monitor line.)
I'm trying to migrate the KV store on a v8.2 installation on Windows, but it fails early in the process:

splunk migrate kvstore-storage-engine --target-engine wiredTiger
ERROR: Cannot get the size of the KVStore folder at=E:\Splunk\Indexes\kvstore\mongo, due to reason=3 errors occurred. Description for first 3: [{operation:"failed to stat file", error:"Access is denied.", file:"E:\Splunk\Indexes\kvstore\mongo"}, {operation:"failed to stat file", error:"Access is denied.", file:"E:\Splunk\Indexes\kvstore\mongo"}, {operation:"failed to stat file", error:"Access is denied.", file:"E:\Splunk\Indexes\kvstore\mongo"}]

I've tried file operations on the folder and subfolders of E:\Splunk\Indexes\kvstore\mongo and everything seems OK. mongod.log does not contain any rows from the migration. Any nudges in the right direction? Can I upgrade to 9.1 without migrating the store?