All Posts

I had the same problem on a UF. Checking the sourcetype props, I noticed that the "magic 6" settings were present on the agent. After deleting them, the collection works again.
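For reference, the "magic 6" usually means these props.conf settings, which normally belong on the indexer or heavy forwarder rather than on a UF; a minimal sketch, with a placeholder sourcetype name:

[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25

If any of these show up in a props.conf on the UF itself, that is worth checking first.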
Hello deepakc, Thank you very much for this information. This forum is great. Kudos to you for helping me understand the "internals" of Splunk. eholz1
I set the alert to High and Security Domain = Network, but in the Incident Review interface it appears as Low with Security Domain = Threat, and every event is classified like this, as shown in the attached images.
I am loading the data into Splunk using the Add Data settings only. Please tell me the exact time configuration that needs to be set.
Hi @bowesmana Your solution worked, and you provided a better example than the Splunk documentation. I appreciate your help. Thanks! I thought I only used one field in my mvfilter, which is fullcode... I guess partialcode is considered the second field.

| eval fullcode2=mvfilter(match(fullcode,partialcode))
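For anyone reading along: the mvfilter documentation says the Boolean expression can reference only one field at a time, which is why a second field inside match() behaves unexpectedly. A minimal, self-contained sketch with a literal pattern (the field values are made up):

| makeresults
| eval fullcode=split("ABC123,DEF456,ABC789", ",")
| eval fullcode2=mvfilter(match(fullcode, "ABC"))

Here fullcode2 keeps only the multivalue entries matching the literal regex "ABC".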
Hi @vijreddy30, just as a test, load this file using the [Settings > Add Data] feature. In this way, you can find (and save) the best sourcetype for your events (e.g. I see that you have a wrong timestamp). Ciao. Giuseppe
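If Add Data reveals the right timestamp settings, you can persist them in props.conf on the parsing tier; a rough sketch, where the sourcetype name and format string are placeholders to adapt:

[your_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 30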
Hi team, I uploaded a CSV file into Splunk. I need the rows from the 47th row to the 7th row of the CSV file combined into a single event. I have written a configuration, but the single event is not working. Please see the screenshots below (From, To, and the 7th row of the CSV file). As the screenshots show, after loading into Splunk it is not a single event from the 47th row to the 7th row. Please help with the configuration.
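If the goal is to merge a range of CSV lines into a single event, line merging in props.conf is the usual lever; a rough sketch under the assumption that each block starts at a recognizable line (the sourcetype name and pattern are placeholders):

[my_csv_sourcetype]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ^Header,Pattern,Here
MAX_EVENTS = 256

Note that MAX_EVENTS caps how many lines get merged into one event, so it may need raising for large blocks.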
That's what I was thinking as well. But when I go to Distributed Search, I get this message, as if I can't add anything. And for ONE SH, that is definitely allowed with the free license. I've reached out to our Splunk rep to ask about the license. Thanks for any help.
Hi @tuts, you probably need to tune your Correlation Search, but that seems to be a different question. Ciao. Giuseppe
Hi @rdhdr, sorry, when I copied your conditions I forgot to use a larger time range! Anyway, let me know if I can help you more, or, please, accept an answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi, following the official instructions at https://apps.splunk.com/apps/id/Splunk_TA_microsoft_sysmon (Splunk Add-on for Sysmon 4.0.0), I deployed the add-on for Sysmon on my indexer, search head, and deployment servers, and started to collect Sysmon logs. I am running Sysmon 15.14 on the endpoints. The logs started to flow into Splunk, but when I search the index I constantly receive the following error:

[indexer.mydomain.es, mysearchhead.mydomain.es] Could not load lookup=LOOKUP-eventcode

I read the information at https://docs.splunk.com/Documentation/AddOns/released/MSSysmon/Lookups but I couldn't find the root cause. The CSV files are in the path indicated in the documentation. Any suggestions? Many thanks
Thanks for the input, Giuseppe. I have not considered a max time between START and END events. I may need to think about that requirement. I notice that you put  earliest==-15m AND latest==-5m at the start of the query. It seems to me that this would check whether both START and END events are > 5 minutes old,  which would be subject to the same issue I have today, in which the alert fires between START and END events. What I think I need is to find a START event > 5 minutes old, with a corresponding END event of any age. Cheers, David
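A sketch of that logic (assuming a lookback window wide enough to catch any END event, here 24 hours; an untested idea, not a finished alert):

index=indxtst (EVENT="START" OR EVENT="END") earliest=-24h
| stats min(eval(if(EVENT=="START",_time,null()))) AS start_time max(eval(if(EVENT=="END",_time,null()))) AS end_time BY UID
| where isnull(end_time) AND start_time <= relative_time(now(), "-5m")

This flags UIDs whose START is more than 5 minutes old and for which no END has arrived at all.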
Hi @rdhdr, is there a desired max time between the two events? If yes, I'd use this:

index=indxtst earliest=-15m latest=-5m
| eval stat=case(EVENT=="START","START",EVENT=="END","END")
| stats dc(stat) AS dc_stat min(eval(if(EVENT=="START",_time,null()))) AS earliest max(eval(if(EVENT=="END",_time,null()))) AS latest values(source) AS source values(EVENT_TYPE) AS EVENT_TYPE values(EVENT_SUBTYPE) AS EVENT_SUBTYPE values(EVENT) AS EVENT BY UID
| where (dc_stat=1 AND EVENT="START") OR latest-earliest>=600
| eval earliest=strftime(earliest,"%Y-%m-%d %H:%M:%S"), latest=if(isnull(latest),"No END event",strftime(latest,"%Y-%m-%d %H:%M:%S"))
| table earliest latest source EVENT_TYPE EVENT_SUBTYPE UID EVENT

Ciao. Giuseppe
Obviously it depends on the types of logs you are monitoring:
- if it's static files, the UF/HF will save a checkpoint of where it stopped reading, and will continue whenever you start it again
- if it's TCP/UDP or syslog-like traffic, you need to adopt other strategies, such as setting up a distributed Splunk environment with a cluster of indexers, or a syslog server to receive the TCP/UDP logs and write them to files.
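For the static-file case, a minimal inputs.conf monitor stanza (the path, index, and sourcetype are placeholders):

[monitor:///var/log/myapp/app.log]
index = main
sourcetype = myapp

The forwarder records its read position per file (the fishbucket), which is the checkpoint that lets it resume where it stopped after a restart.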
Hello, I have programs which write status events to Splunk. At the beginning they write EVENT=START and at the end they write EVENT=END, both with a matching UID. I have created an alert which monitors for a START event without a corresponding END event, in order to find when a program may have terminated abruptly. The alert is:

index=indxtst
| table _time source EVENT_TYPE EVENT_SUBTYPE UID EVENT
| eval stat=case(EVENT=="START","START",EVENT=="END","END")
| eventstats dc(stat) as dc_stat by UID
| search dc_stat=1 AND stat=START

This alert works fine, except sometimes it catches a program while it is still running and simply hasn't written an END event yet. To fix this, I would like to add a delay, but that is not working:

index=indxtst
| table _time source EVENT_TYPE EVENT_SUBTYPE UID EVENT
| eval stat=case(EVENT=="START","START",EVENT=="END","END")
| eventstats dc(stat) as dc_stat by UID
| search dc_stat=1 AND stat=START AND earliest==-15m AND latest==-5m

This pulls back no records at all, even when appropriate testing data is created. What am I doing wrong?
@deepakc  - This works - thank you!
It sounds like:
1. You have a SH (which can't search data)
2. You have an indexer
3. You have a UF which is sending eventgen data to the indexer into your index, and you have verified this is working and can see the data via the CLI, I suspect
4. The SH is also acting as a License Manager (therefore the indexer must point to the License Manager)

Try the steps below and see if that fixes it.

#Add the indexer to your SH
On the SH, via the GUI, go to Settings > Distributed search > Search peers > Add new
Normally it's something like https://MY_INDEXER:8089
Add your admin username and password
Restart Splunk

#Add the indexer to the License Manager as a license peer
From the indexer GUI, go to Settings > Licensing > Change to Peer
Point to the License Manager at https://MY_LICENCE_MANAGER:8089 (this is also your SH)
Restart Splunk
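The same two steps can also be done from the CLI; a sketch, where hostnames and credentials are placeholders and the licenser subcommand name varies by Splunk version (older releases use edit licenser-localslave -master_uri):

# On the SH: add the indexer as a search peer
splunk add search-server https://MY_INDEXER:8089 -auth admin:yourpassword -remoteUsername admin -remotePassword indexerpassword
# On the indexer: point licensing at the License Manager, then restart
splunk edit licenser-localpeer -manager_uri https://MY_LICENCE_MANAGER:8089
splunk restart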
index=XXX sourcetype=XXX [|inputlookup Edge_Nodes_All.csv where Environment="*" AND host="*" |fields host]
|fields cluster, host, user, total_cpu
| join type=inner host
    [search `gold_mpstat` OR `silver_mpstat` OR `platinum_mpstat` OR `palladium_mpstat` [|inputlookup Edge_Nodes_All.csv where Environment="*" AND host="*" |fields host]
    |stats max(eval(id+1)) as cores by host]
|eval pct_CPU = round(total_cpu/cores,2)
|stats max(total_cpu) as total_cpu, max(pct_CPU) as "CPU %" by user,host,cores
|table host user cores total_cpu "CPU %"
|sort - "CPU %"
|head 10

Looking at the screenshot above, from the second column we have ADS IDs and service IDs, which mostly end with s, g, or p according to our environments (silver, gold, and platinum). We have the ADS IDs in the bd_users_hierarchy.csv lookup file; please check the screenshot below. (Note: for security reasons, I had to gray out the email addresses.) The service IDs are in the index below; please check the screenshot:

index=imdc_ops_13m sourcetype=usecase_contact app_id="*"
| dedup app_id
| table _time app_id app_owner app_team_dl

I tried a subsearch using join but was not successful. Any help is appreciated.
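One direction that may be simpler than join, sketched under the assumption that the lookup's key column is named ads_id and has an email column (both names are hypothetical; adjust to your actual CSV header): strip the trailing environment letter from the user and enrich with the lookup command instead:

| eval ads_id=replace(user, "[sgp]$", "")
| lookup bd_users_hierarchy.csv ads_id OUTPUT email AS owner_email

The lookup command avoids join's subsearch limits; the replace() assumes the IDs differ from the lookup keys only by that trailing s/g/p.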
Hi, I started using tags by tagging my hosts with the environment they are in and the service the host belongs to. Using these tags in log/event indexes works perfectly well, but I am not able to filter by tags in mstats. I tried many variations of WHERE tag=env.prod or WHERE "tag::host"="env.prod", but none return any results. I checked that these tags really are there with mpreview, which shows all the tags on the specific hosts, and I was also able to filter with a small workaround using the tags command:

| mstats rate(os.unix.nmon.storage.diskread) AS read rate(os.unix.nmon.storage.diskwrite) AS write WHERE `my-metric-indizes` AND (host=*) BY host span=5m
| tags
| where "service.vault" IN (tag) AND "env.prod" IN (tag)
| stats sum(read) AS read, sum(write) AS write by _time,host
| timechart max(read) as read, max(write) as write bins=1000 by host

Is there a way to filter by a tag directly in mstats? The workaround is not very performance-friendly...
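One possible direction, offered as something to test rather than a confirmed capability: precompute the tagged hosts into a lookup (for example via a scheduled search built on the tags workaround above, writing a hypothetical vault_prod_hosts.csv), then substitute that host list into the mstats filter with a subsearch, assuming your Splunk version expands subsearches in the mstats WHERE clause:

| mstats rate(os.unix.nmon.storage.diskread) AS read rate(os.unix.nmon.storage.diskwrite) AS write WHERE `my-metric-indizes` AND [| inputlookup vault_prod_hosts.csv | fields host] BY host span=5m
| timechart max(read) as read, max(write) as write bins=1000 by host

If the subsearch is not honored there, the lookup can still replace the tags command as a cheaper per-row filter.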