All Posts

That's what I was thinking as well. But when I go to Distributed Search, I get this message, as if I can't add anything. And for one SH, that is definitely allowed with the free license. I've reached out to our Splunk rep to ask about the license. Thanks for any help.
Hi @tuts, you probably have to tune your Correlation Search, but this seems to be a different question. Ciao. Giuseppe
Hi @rdhdr, sorry, when I copied your conditions I forgot to use a larger time range! Anyway, let me know if I can help you more, or please accept one answer for the other people of the Community. Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated.
Hi, following the official instructions at https://apps.splunk.com/apps/id/Splunk_TA_microsoft_sysmon for the Splunk Add-on for Sysmon 4.0.0, I just deployed the add-on for Sysmon on my indexer, search head and deployment servers, and started to collect Sysmon logs. I am running Sysmon 15.14 on the endpoints. The logs started to flow into Splunk, but when I run searches on the index I constantly receive the following error:

[indexer.mydomain.es, mysearchhead.mydomain.es] Could not load lookup=LOOKUP-eventcode

I read the information at https://docs.splunk.com/Documentation/AddOns/released/MSSysmon/Lookups but I couldn't find the root cause. The CSVs are in the path indicated in the documentation. Any suggestions? Many thanks.
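One way to narrow an error like this down (a diagnostic sketch only, not necessarily the root cause here) is to check on which instances the lookup definition is actually visible and how it is shared, since a definition that is private to an app or missing on the peers produces a similar "Could not load lookup" message. The *eventcode* name pattern below is an assumption based on the error text:

| rest /servicesNS/-/-/data/transforms/lookups splunk_server=* count=0
| search title="*eventcode*"
| table splunk_server title filename eai:acl.app eai:acl.sharing

If the definition only shows up on the search head, or its sharing is app-private, that would be a reasonable place to start looking.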
Thanks for the input, Giuseppe. I have not considered a max time between START and END events. I may need to think about that requirement. I notice that you put  earliest==-15m AND latest==-5m at the start of the query. It seems to me that this would check whether both START and END events are > 5 minutes old,  which would be subject to the same issue I have today, in which the alert fires between START and END events. What I think I need is to find a START event > 5 minutes old, with a corresponding END event of any age. Cheers, David
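For what it's worth, one way to express "alert only on STARTs that are more than 5 minutes old and still have no END" is to compare the START time against relative_time() rather than relying on earliest/latest inside a later search command. This is only a sketch using the field names from the original question; the 24h look-back window is arbitrary and just needs to be wide enough to catch any matching END events:

index=indxtst earliest=-24h
| eval stat=case(EVENT=="START","START", EVENT=="END","END")
| stats min(_time) AS start_time dc(stat) AS dc_stat values(stat) AS stat_values by UID
| where dc_stat=1 AND stat_values="START" AND start_time <= relative_time(now(), "-5m")
| convert ctime(start_time)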
Hi @rdhdr, is there a desired max time between the two events? If yes, I'd use something like this:

index=indxtst earliest=-15m AND latest=-5m
| eval stat=case(EVENT=="START","START", EVENT=="END","END")
| stats dc(stat) AS dc_stat earliest(eval(if(EVENT=="START",_time,null()))) AS earliest latest(eval(if(EVENT=="END",_time,null()))) AS latest values(source) AS source values(EVENT_TYPE) AS EVENT_TYPE values(EVENT_SUBTYPE) AS EVENT_SUBTYPE values(EVENT) AS EVENT by UID
| where (dc_stat=1 AND EVENT="START") OR latest-earliest>=600
| eval earliest=strftime(earliest,"%Y-%m-%d %H:%M:%S"), latest=if(isnull(latest),"No END event",strftime(latest,"%Y-%m-%d %H:%M:%S"))
| table UID source EVENT_TYPE EVENT_SUBTYPE EVENT earliest latest

Ciao. Giuseppe
Obviously it depends on the types of logs you are monitoring (see the sketch after this list):
- if it's static files, the UF/HF will save a checkpoint of where it stopped reading, and will continue from there whenever you start it again
- if it's tcp/udp or syslog-like input, you need to adopt other strategies, like setting up a distributed Splunk environment with a cluster of indexers, or a syslog server that receives the tcp/udp logs and writes them to files.
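To illustrate the difference, here is a minimal inputs.conf sketch; the path, port, index and sourcetype values are made up for the example:

# Monitored file: the forwarder keeps a checkpoint (the fishbucket) and resumes
# from the last read position after a stop/start, so already-written lines are not lost
[monitor:///var/log/myapp/app.log]
index = main
sourcetype = myapp:log

# Network input: events arriving while the forwarder is down are simply gone,
# which is why syslog-style sources are usually written to files by a syslog server first
[tcp://5140]
index = main
sourcetype = syslog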
Hello, I have programs which write status events to Splunk. At the beginning they write EVENT=START and at the end, they write EVENT=END, both with a matching UID. I have created an alert which monitors for a START event without a corresponding END event, in order to find when a program may terminate abruptly. The alert is:

index=indxtst
| table _time source EVENT_TYPE EVENT_SUBTYPE UID EVENT
| eval stat=case(EVENT=="START","START",EVENT=="END","END")
| eventstats dc(stat) as dc_stat by UID
| search dc_stat=1 AND stat=START

This alert works fine, except sometimes it catches it while the program is running and simply hasn't written an END event yet. To fix this, I would like to add a delay, but that is not working.

index=indxtst
| table _time source EVENT_TYPE EVENT_SUBTYPE UID EVENT
| eval stat=case(EVENT=="START","START",EVENT=="END","END")
| eventstats dc(stat) as dc_stat by UID
| search dc_stat=1 AND stat=START AND earliest==-15m AND latest==-5m

This pulls back no records at all, even when appropriate testing data is created. What am I doing wrong?
@deepakc  - This works - thank you!
It sounds like you have:
1. A SH (which can't search data)
2. An indexer
3. A UF which is sending eventgen data to the indexer into your index, and you have verified this is working and can see the data via the CLI, I suspect.
4. The SH is also acting as a License Manager (therefore the indexer must point to the License Manager).

Try the steps below and see if that fixes it; a rough configuration-file equivalent follows after the steps.

# Add the indexer to your SH as a search peer
On the SH, via the GUI, go to Settings > Distributed search > Search peers > Add new.
Normally it's something like https://MY_INDEXER:8089
Add your admin username and password.
Restart Splunk.

# Add the indexer to the Licence Manager as a licence peer
From the indexer GUI: Settings > Licensing > Change to Peer.
Point it to the Licence Manager: https://MY_LICENCE_MANAGER:8089 (this is also your SH).
Restart Splunk.
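For reference, here is a rough sketch of what those GUI steps correspond to on disk, assuming the hostnames used above. Treat it as an illustration rather than a drop-in config: the GUI also handles the trust/authentication exchange for search peers, and attribute names can vary between Splunk versions (newer releases use manager_uri instead of master_uri).

On the SH, distsearch.conf:
[distributedSearch]
servers = https://MY_INDEXER:8089

On the indexer, server.conf, pointing it at the licence manager:
[license]
master_uri = https://MY_LICENCE_MANAGER:8089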
index=XXX sourcetype=XXX [|inputlookup Edge_Nodes_All.csv where Environment="*" AND host="*" |fields host]
|fields cluster, host, user, total_cpu
| join type=inner host [search `gold_mpstat` OR `silver_mpstat` OR `platinum_mpstat` OR `palladium_mpstat` [|inputlookup Edge_Nodes_All.csv where Environment="*" AND host="*" |fields host] |stats max(eval(id+1)) as cores by host]
|eval pct_CPU = round(total_cpu/cores,2)
|stats max(total_cpu) as total_cpu, max(pct_CPU) as "CPU %" by user,host,cores
|table host user cores total_cpu,"CPU %"
|sort - "CPU %"
|head 10

If you look at the above screenshot, from the second column we have ADS-IDs, and the service-IDs mostly end with s, g or p according to our environments (silver, gold and platinum). We have the ADS-IDs in the bd_users_hierarchy.csv lookup file, please check the screenshot below (note: for security reasons I had to gray out the email addresses). The service-IDs are in the index below, please check the screenshot:

index = imdc_ops_13m sourcetype = usecase_contact app_id="*"
| dedup app_id
| table _time app_id app_owner app_team_dl

I was trying a subsearch using join, but was not successful. Any help is appreciated.
Hi, I started using tags by tagging my hosts with the environment they are in and the service the host belongs to. Using these tags in log/event indices works perfectly well, but I am not able to filter by tags in mstats. I tried many variations of "WHERE tag=env.prod" or "WHERE "tag::host"="env.prod"", but none return any results. I checked that these tags really are there with mpreview, which shows all the tags on the specific hosts, and I was also able to filter with a small workaround using the tags command:

| mstats rate(os.unix.nmon.storage.diskread) AS read rate(os.unix.nmon.storage.diskwrite) AS write WHERE `my-metric-indizes` AND (host=*) BY host span=5m
| tags
| WHERE "service.vault" IN (tag) AND "env.prod" in (tag)
| stats sum(read) AS read, sum(write) AS write by _time,host
| timechart max(read) as read, max(write) as write bins=1000 by host

Is there a way to filter by a tag directly in mstats? The workaround is not very performance friendly...
I have many dashboards that are already in the Classic Dashboard format. These have their source code in the form of XML. I made a new dashboard through Dashboard Studio. I wish to migrate this dashboard to the Classic Dashboard format, i.e. turn my JSON-based dashboard into a simple XML-based dashboard. I tried researching and surfing the web, but I only found resources for migration from classic dashboards to the new format. Could someone please help me with this? TIA.
Replace with this and see if that gives you the results.

| spath output=eventType path=event
| spath output=agreementId path=agreement.id
| spath output=agreementStatus path=agreement.status
| spath output=participantUserEmail path=participantUserEmail
| spath output=participantSets path=agreement.participantSetsInfo.participantSets{}
| mvexpand participantSets
| spath input=participantSets output=memberInfos path=memberInfos{}
| mvexpand memberInfos
| spath input=memberInfos path=email output=memberEmail
| spath input=memberInfos path=status output=memberStatus
| table _time, agreementId, eventType, agreementStatus, participantUserEmail, memberEmail, memberStatus
Hello, thanks for your response. I would like to understand how stopping the UF and HF would prevent log loss? Waiting for your response. Regards, Satyam
In terms of how Splunk determines the iowait stats: Splunk uses the REST API for these checks in the background. It runs them every so often (I can't remember the exact timing), but it collects them at regular intervals, built into Splunk.

# This shows the various resource usage on the target Splunk instance (local in this case)
| rest splunk_server=local /services/server/status/resource-usage/

# This shows the iowait stats on the target Splunk instance (local in this case)
| rest splunk_server=local /services/server/status/resource-usage/iowait
Hello, the number of graphs in my Splunk dashboards is growing over time. I would therefore like to build a kind of accordion, so that I can expand and collapse the individual areas. Can someone please tell me how to do this? Best regards, Alex
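In classic Simple XML dashboards, one common way to approximate an accordion is a link input whose change handler sets or unsets a token, with each row depending on that token. This is only a sketch: the token, labels and the example search are made up, and it assumes a classic XML dashboard rather than Dashboard Studio.

<input type="link" token="cpu_toggle" searchWhenChanged="false">
  <label>CPU graphs</label>
  <choice value="show">Expand</choice>
  <choice value="hide">Collapse</choice>
  <default>hide</default>
  <change>
    <condition value="show">
      <set token="show_cpu">true</set>
    </condition>
    <condition value="hide">
      <unset token="show_cpu"></unset>
    </condition>
  </change>
</input>
<row depends="$show_cpu$">
  <panel>
    <chart>
      <search><query>index=_internal | timechart count</query></search>
    </chart>
  </panel>
</row>

Each section of the dashboard would get its own input/token pair; when the token is unset, the depends attribute hides the whole row.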
Hi, this is a single Splunk instance. I searched for that in Splunk and got a couple of results from metrics.log, but nothing came out as a Warning. The log_level is INFO for all of them.
Eventtypes are for searching specific events/data you're interested in (a quick way to get some results from data that has already been indexed).

1. If you are only interested in some specific eventtypes and want to discard the rest, you could copy each of the eventtype stanza names into /local/eventtypes.conf and disable them, but I'm not sure why you would want to do that, as many of these also use tags for future use cases such as Splunk data models etc.
2. If you want to tune some of these, for example by adding your index name, then also do that in local/eventtypes.conf.

Example: disable an eventtype in /local/eventtypes.conf

[windows_event_signature]
disabled = [1|0] (1 = disabled - 0 = enabled)

Or tune an eventtype with my index, for example in /local/eventtypes.conf

[windows_event_signature]
search = index=my_windows_index sourcetype=WinEventLog OR sourcetype=XmlWinEventLog OR sourcetype=WMI:WinEventLog:System OR sourcetype=WMI:WinEventLog:Security OR sourcetype=WMI:WinEventLog:Application OR sourcetype=wineventlog OR sourcetype=xmlwineventlog

More on eventtype concepts: https://docs.splunk.com/Documentation/Splunk/9.2.1/Knowledge/Abouteventtypes
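Once an eventtype is enabled and scoped to the right index, it can be used directly as a search term. A minimal sketch, reusing the stanza name from the example above (the index name is hypothetical):

index=my_windows_index eventtype=windows_event_signature
| stats count by host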
After upgrading Splunk Universal Forwarders from version 8.1.x to 9.2.x on Windows machines in a distributed environment, my question is: is it mandatory for both the old and new services to run simultaneously (in parallel), or should only the new version be running? Also, must the old version be deleted or not?