All Posts

index=*sap sourcetype=FSC*
| fields _time index Eventts ID FIELD_02 FIELD_01 CODE ID FIELD* source
| rex field=index "^(?<prefix>\d+_\d+)"
| lookup lookup_site_ids.csv prefix as prefix output name as Site
| eval name2=substr(Site,8,4)
| rex field=Eventts "(?<Date>\d{4}-\d{2}-\d{2})T(?<Time>\d{2}:\d{2}:\d{2}\.\d{3})"
| fields - Eventts
| eval timestamp = Date . " " . Time
| eval _time = strptime(timestamp, "%Y-%m-%d %H:%M:%S.%3N")
| eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S.%3N"), Condition="test"
| eval Stamp = strftime(_time, "%Y-%m-%d %H:%M:%S.%3N")
| lookup Stoppage.csv name as Site OUTPUT Condition Time as Stamp
| search Condition="Stoppage"
| where Stamp = Time
| eval index_time = strptime(Time, "%Y-%m-%d %H:%M:%S.%3N")
| eval lookup_time = strftime(Stamp, "%Y-%m-%d %H:%M:%S.%3N")
| eval CODE=if(isnull(CODE),"N/A",CODE), FIELD_01=if(isnull(FIELD_01),"N/A",FIELD_01), FIELD_02=if(isnull(FIELD_02),"N/A",FIELD_02)
| lookup code_translator.csv FIELD_01 as FIELD_01 output nonzero_bits as nonzero_bits
| eval nonzero_bits=if(FIELD_02="ST" AND FIELD_01="DA",nonzero_bits,"N/A")
| mvexpand nonzero_bits
| lookup Decomposition_File.csv Site as name2 Alarm_bit_index as nonzero_bits "Componenty_type_and_CODE" as CODE "Component_number" as ID output "Symbolic_name" as Symbolic_name Alarm_type as Alarm_type Brief_alarm_description as Brief_alarm_description Alarm_solution
| eval Symbolic_name=if(FIELD_01="DA",Symbolic_name,"N/A"), Brief_alarm_description=if(FIELD_01="DA",Brief_alarm_description,"N/A"), Alarm_type=if(FIELD_01="DA",Alarm_type,"N/A"), Alarm_solution=if(FIELD_01="DA",Alarm_solution,"N/A")
| fillnull value="N/A" Symbolic_name Brief_alarm_description Alarm_type
| table Site Symbolic_name Brief_alarm_description Alarm_type Alarm_solution Condition Value index_time Time _time Stamp lookup_time
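
The timestamp round-trip in the middle of that search can be tested in isolation with makeresults (a minimal sketch, using a made-up Eventts value):

| makeresults
| eval Eventts="2025-06-01T08:15:30.123"
| rex field=Eventts "(?<Date>\d{4}-\d{2}-\d{2})T(?<Time>\d{2}:\d{2}:\d{2}\.\d{3})"
| eval timestamp=Date." ".Time
| eval _time=strptime(timestamp, "%Y-%m-%d %H:%M:%S.%3N")
| eval Time=strftime(_time, "%Y-%m-%d %H:%M:%S.%3N")

If _time comes back null here, the strptime format string is the first thing to check.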
Thanks for the update.
Hi @braxton839

If they are HFs then the config should work - you'll need to restart the HFs after deploying.

== props.conf ==
[juniper]
TRANSFORMS-aSetnull = setnull

== transforms.conf ==
# Filter juniper teardown logs to nullqueue
[setnull]
REGEX = RT_FLOW_SESSION_CLOSE
DEST_KEY = queue
FORMAT = nullQueue

If it's coming in with the juniper sourcetype then I'm not sure why this wouldn't work. It's worth double-checking for typos etc. I assume there are no other props/transforms that you have customised which alter the queue value? I've updated the TRANSFORMS- suffix on the above from the original to see if ordering makes any difference here; this should change the precedence and be applied before other things like sourcetype renaming.
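
If the stanza still doesn't take effect, btool will show exactly what the HF merged and from which file (run on the HF; $SPLUNK_HOME is wherever Splunk is installed):

# Effective props for the juniper sourcetype, annotated with source files
$SPLUNK_HOME/bin/splunk btool props list juniper --debug
# The transforms stanza it references
$SPLUNK_HOME/bin/splunk btool transforms list setnull --debug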
Hi @heres1

After a Splunk Enterprise upgrade, if Forwarder Management is not showing any "phoning home" (i.e., connected) Universal Forwarders, you probably want to check a few things as below:

- Check that the deployment server (Forwarder Management) settings, SSL certificates, and the deploymentclient configuration on your Universal Forwarders are intact and not overwritten by the upgrade. You mentioned restoring the /etc folder - I assume this includes the Splunk secret in etc/auth?
- Ensure the deployment server port (default 8089) is up and listening, and network connectivity from forwarders to this port is working. It's worth using curl where possible from one of the UFs to verify this (see the curl sketch after this post).
- Check $SPLUNK_HOME/var/log/splunk/splunkd.log on both the server and forwarders for phoning-home errors.
- Upgrades may overwrite configuration files or change SSL settings. If /etc was restored, verify deployment-specific files like deploymentclient.conf (on forwarders) and serverclass.conf (on the deployment server) are correct and that certificates/keys are valid.

Did you just upgrade the Deployment Server, or the UFs too? As @kiran_panchavat mentioned, there were changes in 9.2 which affect the indexes used for DS data, although you were already on 9.3.1, right? Were the clients definitely showing in Forwarder Management / Agent Manager prior to the upgrade?

Note: The index configuration changes (https://docs.splunk.com/Documentation/Splunk/latest/Updating/Upgradepre-9.2deploymentservers) do not affect the operation of the DS, i.e. it will still deploy apps to the UFs; they just do not show up in the UI, so it's worth confirming that they are still able to access the DS!
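
A minimal connectivity check from a UF to the deployment server could look like this (ds.example.com is a placeholder; any HTTP response at all, even a 401, proves the port is reachable):

# Run from a Universal Forwarder host
curl -k https://ds.example.com:8089/services/server/info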
@heres1

Check this: https://docs.splunk.com/Documentation/Splunk/9.4.2/Updating/Upgradepre-9.2deploymentservers

This problem can occur in Splunk Enterprise 9.2 or higher if your deployment server forwards its internal logs to a standalone indexer or to the peer nodes of an indexer cluster. This issue can occur after an upgrade or in a new installation of 9.2 or higher. To rectify, add these settings to outputs.conf on the deployment server:

[indexAndForward]
index = true
selectiveIndexing = true

If you add these settings post-upgrade or post-installation, you might need to restart the deployment server.

Indexers require new internal deployment server indexes

The deployment server uses several internal indexes new in version 9.2. These indexes are included in all indexers at the 9.2 level and higher, but if you try to forward data from those indexes to a pre-9.2 indexer, problems can result. If you forward data to your indexer tier, create these new internal deployment server indexes in indexes.conf on any pre-9.2 indexers in your environment:

[_dsphonehome]
[_dsclient]
[_dsappevent]

If the indexers are at version 9.2 or higher, they are already configured with those indexes.

Data does not appear when forwarded through an intermediate forwarder

This problem can occur if your deployment server forwards its internal index data through an intermediate forwarder to a standalone indexer or to the peer nodes of an indexer cluster. To rectify, add this setting to outputs.conf on the intermediate forwarder:

[tcpout]
forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup|_configtracker|_dsclient|_dsphonehome|_dsappevent)

If you specify the configuration within a deployment app and use the deployment server to deploy the app to the affected intermediate forwarders, you can later uninstall the app when the intermediate forwarders are upgraded to a future release that incorporates the update.

Deployment Server's Forwarder Management UI exhibits unexpected behaviours after upgrading to version 9.2.x. | Splunk
https://community.splunk.com/t5/Splunk-Enterprise/After-upgrading-my-DS-to-Enterprise-9-2-2-clients-can-t-connect/m-p/695607
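
To confirm whether phone-home data is actually landing after the fix, a quick check from the search head might be (a sketch; assumes the DS internal indexes are searchable from there):

| tstats count where index=_dsphonehome OR index=_dsclient OR index=_dsappevent by index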
I have upgraded Splunk Enterprise from 9.3.1 to 9.4.2 and already restored /etc, but now Forwarder Management does not show any Universal Forwarders phoning home.
Yes, this is a Heavy Forwarder (to be specific, 2 Heavy Forwarders). Juniper device event logs are sent directly to these Heavy Forwarders. According to our inputs.conf file, the sourcetype for these events is: juniper
@livehybrid Not sure how it's working for you, as I am still unable to get the results.
Hi @wjrbrady, I'm sorry but it isn't possible to dynamically change the span value in a timechart command. You have to define a value. Ciao. Giuseppe
It works! Thank you for the solution :)! 
Please try my updated query.
Hello, I am trying to change the span in timechart within the search itself. So if the hour is, say, greater than 7 and less than 19, make the span 10m, else 1h. Example:

| eval hour=strftime(_time,"%H")
| eval span=if(hour>=7 AND hour<19,"10m","1h")
| timechart span=span count(field1), count(field2) by field3
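
As the reply above notes, timechart cannot take a per-event span. One workaround (not from this thread - a sketch reusing the question's field names) is to snap _time to a manually computed bucket and use stats instead:

| eval hour=tonumber(strftime(_time, "%H"))
| eval span_sec=if(hour>=7 AND hour<19, 600, 3600)
| eval _time=floor(_time/span_sec)*span_sec
| stats count(field1) as field1_count, count(field2) as field2_count by _time, field3

Here 600 and 3600 are the bucket sizes in seconds (10m and 1h); flooring _time to the bucket start emulates a variable span, at the cost of timechart's automatic column handling.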
Hi @livehybrid  We don't have the ITSI "internal" license listed. Also, we use NFR (Not For Resale) licences.
@livehybrid The issue is that in my query I am fetching data for the last 6 months, so if someone runs the query to date it will give results from December till now, and there is also a 0 count for some months, so those will look blank. Something like this if I hardcode the months
Hi @mchoudhary

The easiest way might be to add a table on the end, something like this:

| table Source Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec *
Hi Everyone! I wrote a search query to get the blocked count of emails for the last 6 months, and below is my query -

| tstats summariesonly=false dc(Message_Log.msg.header.message-id) as Blocked from datamodel=pps_ondemand where (Message_Log.filter.routeDirection="inbound") AND (Message_Log.filter.disposition="discard" OR Message_Log.filter.disposition="reject" OR Message_Log.filter.quarantine.folder="Spam*") earliest=-6mon@mon latest=now by _time
| eval Source="Email"
| eval Month=strftime(_time, "%b")
| stats sum(Blocked) as Blocked by Source Month
| eventstats sum(Blocked) as Total by Source
| appendpipe [ stats values(Total) as Blocked by Source | eval Month="Total" ]
| xyseries Source Month Blocked
| fillnull value=0

and its output looks something like this - The only issue is that in the output the Month field is not chronologically sorted; instead it is alphabetical. I intend to sort it chronologically. I tried with the below query as well to achieve the desired output, but no go -

| eval MonthNum=strftime(_time, "%Y-%m"), MonthName=strftime(_time, "%b")
| stats sum(Blocked) as Blocked by Source MonthNum MonthName
| eventstats sum(Blocked) as Total by Source
| appendpipe [ stats values(Total) as Blocked by Source | eval MonthNum="9999-99", MonthName="Total" ]
| sort MonthNum
| eval Month=MonthName
| table Source Month Blocked

Could someone please help here! Thanks in advance
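
For what it's worth, applying the trailing-table fix suggested in the reply above to the end of the first query would look like this (a sketch; assumes the %b abbreviations and the Total column from the appendpipe):

| xyseries Source Month Blocked
| fillnull value=0
| table Source Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec Total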
Hi @ralphsteen

There is some free veteran training over at https://workplus.splunk.com/veterans as part of the WorkPlus+ scheme, so you may be able to use this to get onto the Enterprise Security (ES) training. However, if it's specifically for CompTIA Security+ then you might need to contact them through their site to see if they can determine why there is a cost showing against the training.
Is there a special log-in for the Veterans Workforce Program?

Am I currently signed in as a regular user? I signed up for the Veteran's Workforce Program a while back and thought I got a confirmation, but now can't find it. Under that program, is there a free program for Splunk Enterprise Security? When I find it under this login there is a price for that course. That course is pre-approved by CompTIA for PDUs to renew my Security X, so that's why I want to take it. Any help would be appreciated. Ralph P Steen Jr
Hi @berrybob

When testing with curl, were you using the same Pod address as used in DSDL, or directly on the Pod IP? Are you able to hit port 5000 on the container host and reach the API within the Pod?
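
For example, from the container host (10.42.0.17 is a placeholder Pod IP; any response, even a 404, shows the port itself is reachable):

curl -v http://10.42.0.17:5000/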
[yourSourceType]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\S\s\n]+"predictions":\s\[\s*)|}(\s*\,\s*){|([\s\n\r]*\][\s\n\r]*}[\s\n\r]*)
NO_BINARY_CHECK=true
TIME_PREFIX="ds":\s"
TIME_FORMAT=%Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD=20
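
For reference, these settings assume events arriving in roughly this JSON shape (a hypothetical payload; only the "predictions" array and the per-object "ds" timestamp are implied by the LINE_BREAKER and TIME_PREFIX above - "yhat" is made up):

{
  "predictions": [
    {"ds": "2025-06-01T00:00:00", "yhat": 42.1},
    {"ds": "2025-06-01T01:00:00", "yhat": 43.7}
  ]
}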