All Posts

I have many dashboards that are already in the classic dashboard format, with source code in XML. I made a new dashboard through Dashboard Studio and wish to migrate this dashboard to the classic dashboard format, i.e. turn my JSON-based dashboard into a Simple XML based dashboard. I tried researching and surfing the web, but I only found resources for migration from classic dashboards to the new format. Could someone please help me with this? TIA.
Replace with this and see if that gives you the results:

| spath output=eventType path=event
| spath output=agreementId path=agreement.id
| spath output=agreementStatus path=agreement.status
| spath output=participantUserEmail path=participantUserEmail
| spath output=participantSets path=agreement.participantSetsInfo.participantSets{}
| mvexpand participantSets
| spath input=participantSets output=memberInfos path=memberInfos{}
| mvexpand memberInfos
| spath input=memberInfos path=email output=memberEmail
| spath input=memberInfos path=status output=memberStatus
| table _time, agreementId, eventType, agreementStatus, participantUserEmail, memberEmail, memberStatus
Hello, thanks for your response. I would like to understand how stopping the UF and HF would prevent log loss. Waiting for your response.

Regards, Satyam
In terms of how Splunk determines the iowait stats: Splunk uses its built-in REST API for these checks in the background, running them every so often (I can't remember the exact times) but collecting at regular intervals.

# This shows the various resources on the target Splunk instance (local in this case)
| rest splunk_server=local /services/server/status/resource-usage/

# This shows the iowait stats on the target Splunk instance (local in this case)
| rest splunk_server=local /services/server/status/resource-usage/iowait
Hello, my graphs in Splunk are becoming very numerous over time. I would therefore like to build a kind of accordion to be able to expand and collapse the individual areas. Can someone please tell me how to do this?

Best regards, Alex
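One common approach, sketched below in Simple XML, is to drive each row's visibility from a token set by an input. The token name, labels, and search here are illustrative placeholders, not a confirmed recipe:

<form>
  <fieldset submitButton="false">
    <input type="checkbox" token="show_cpu_tok" searchWhenChanged="true">
      <label>Sections</label>
      <choice value="show">CPU graphs</choice>
    </input>
  </fieldset>
  <!-- The row is only rendered while $show_cpu_tok$ is set -->
  <row depends="$show_cpu_tok$">
    <panel>
      <chart>
        <search>
          <query>index=_internal | timechart count</query>
          <earliest>-24h</earliest>
          <latest>now</latest>
        </search>
      </chart>
    </panel>
  </row>
</form>

Checking the box sets $show_cpu_tok$ and the row appears; unchecking unsets the token and the row collapses. Repeating this with one token per section gives an accordion-like effect.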
Hi, single Splunk instance. I searched for that in Splunk and got a couple of results from metrics.log, but nothing came out as a warning; the log_level is INFO for all of them.
Eventtypes are for searching specific events/data you're interested in (a quick way to get some results from data that has already been indexed).

1. If you are only interested in some specific eventtypes and want to discard the rest, you could copy each of the eventtype stanza names into local/eventtypes.conf and disable them, but I'm not sure why you would want to do that, as many of these also use tags for future use cases such as Splunk data models.

2. If you want to tune some of these, e.g. by adding your index name, then also do that in local/eventtypes.conf.

Example: disable an eventtype in local/eventtypes.conf:

[windows_event_signature]
disabled = [1|0] (1 = disabled, 0 = enabled)

Or tune an eventtype with your index, for example in local/eventtypes.conf:

[windows_event_signature]
search = index=my_windows_index sourcetype=WinEventLog OR sourcetype=XmlWinEventLog OR sourcetype=WMI:WinEventLog:System OR sourcetype=WMI:WinEventLog:Security OR sourcetype=WMI:WinEventLog:Application OR sourcetype=wineventlog OR sourcetype=xmlwineventlog

More on eventtype concepts: https://docs.splunk.com/Documentation/Splunk/9.2.1/Knowledge/Abouteventtypes
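Once an eventtype is defined, it can be referenced directly in a search; a quick sketch using the stanza above:

eventtype=windows_event_signature
| stats count by host

The eventtype expands to its search string at search time, which is why tuning your index into the eventtype also narrows every search and data model built on top of it.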
After upgrading Splunk Universal Forwarders from version 8.1.x to 9.2.x on Windows machines in a distributed environment, my question is: is it mandatory for both the old and new services to run simultaneously (in parallel), or should only the new version be running? Also, must the old version be deleted or not?
Hello, in ITSI I would like to receive alerts when an entity is critical or high, and I cannot find how to configure that. On the other hand, when a KPI alert shows up, I get the message "No impacted entities" in the episode review, even though there are entities in critical. Thanks in advance for the help.
Has nobody come across this issue?
Hi, the best way to avoid logs being lost would be to stop the UFs and HFs before taking down the indexers. Another way would be to disable the inputs in inputs.conf, by creating/updating a configuration like this:

[perfmon:*]
disabled = true

[WinEventLog:]
disabled = true

[monitor://<path>]
disabled = true

------------
If this was helpful, some karma would be appreciated.
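To further reduce the risk of losing in-flight data while the indexers are down, forwarders can also be made to wait for indexer acknowledgement. A minimal outputs.conf sketch (the group name and server address are placeholders, not from this thread):

[tcpout:primary_indexers]
server = idx1.example.com:9997
useACK = true

With useACK = true the forwarder holds events in its wait queue until the indexer confirms they were indexed, and resends anything unacknowledged once the indexers come back.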
Hi, do you have a distributed environment with indexers and search heads, or a single Splunk instance? Try searching this in Splunk and look for error or warning messages:

index=_internal "csv_data"
Hi, try this sed expression instead:

| rex field=ip mode=sed "s/\b0+([0-9]+)/\1/g"

------------
If this was helpful, some karma would be appreciated.
Hello all, we have a server on which an indexer and a search head are deployed, and we are receiving logs from UFs and HFs. Due to some requirement, we might need downtime for this server. Will there be any log loss due to this server downtime? If yes, for how long will logs be lost? If the UF caches logs locally, for how long does it cache them, and is there any dependency on the cache memory available on the UF server?
So I have three servers in this Splunk infrastructure: a SH, an indexer, and a forwarder. I have installed the free 10GB dev license as well as the 50GB one, and am not using clustering anywhere. I installed and followed this guide to send test data to our boxes: https://splunkbase.splunk.com/app/1924

I set the app up on the forwarder, and I can see the data in the index I created (testindex) on the indexer. I can view the sample data. I cannot, however, view the data from the SH.

My problem right now is I cannot find what I am missing. I have looked everywhere and can't figure it out. I have confirmed my server.conf, distsearch.conf, and the outputs.conf on the forwarder; I have made the pass4SymmKey the same on all machines; and I can ping each server from the others, so connectivity is good. What else can I check? Most of the Splunk docs I see are for clustered environments, and I am struggling to find relevant docs.

I have set the SH to be the license master, and both machines point to the SH as license manager, yet on the SH I do not see any instance other than itself as an indexer. When I go to add a new pool, as I see it on our DMC, I can only add the SH itself as an available indexer. On our production DMC, we have all of the indexers listed. I should be seeing the indexer showing up somewhere on the SH, but I don't see it mentioned anywhere. Checking _internal logs, I just see the SH's own hostname. I'm having trouble figuring out where I'm going wrong; the SH should see the indexer based on my findings and setup.

Any help or guidance would be appreciated. Thank you.
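In a non-clustered setup like this, the SH does not discover the indexer on its own; it has to be added as a search peer, either under Settings > Distributed search > Search peers, or in distsearch.conf on the search head. A minimal sketch (the hostname is a placeholder):

[distributedSearch]
servers = https://indexer.example.com:8089

Note that adding the peer through the UI or CLI also performs the trust/key exchange between SH and indexer; distributed search does not authenticate with pass4SymmKey, so matching that key alone is not enough.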
@bowesmana I'm still not getting the expected result. The regex pattern should remove all leading zeros in each octet; however, it only strips the zeros in the first octet where it finds a match. For example:

1) 010.1.2.3 -> removes the zero in the 1st octet -> 10.1.2.3
2) 10.001.2.3 -> removes the zeros in the 2nd octet -> 10.1.2.3
3) 010.001.2.3 -> removes the zero in the 1st octet only -> 10.001.2.3
4) 10.001.002.3 -> removes the zeros in the 2nd octet only -> 10.1.002.3
5) 10.1.2.003 -> removes the zeros in the 4th octet -> 10.1.2.3

Regards, Sid
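In case it helps isolate the problem, a variant that anchors each match to the start of the string or a preceding dot should strip zeros from every octet in one pass; this is a sketch, assuming the field contains only the IP:

| rex field=ip mode=sed "s/(^|\.)0+(\d)/\1\2/g"

Because the run of zeros must be followed by a kept digit (\2), a lone 0 octet (as in 10.0.0.1) is left intact.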
Hi bowesmana, thanks for the efforts. We have these data sets:

index=acn_lendlease_certificate_tier3_idx tower=Self_Signed_Certificate
| stats latest(tower) as Tower, latest(source_host) as source_host, latest(metric_value) as "Days To Expire", latest(alert_value) as alert_value, latest(add_info) as "Additional Info" by instance
| eval alert_value=case(alert_value==100,"Active",alert_value==300,"About to Expire", alert_value==500,"Expired")
| where alert_value="About to Expire"
| search Tower="*" AND alert_value="*"
| sort "Days To Expire"
| rename instance as "Serial Number / Server ID", Tower as "Certificate Type", source_host as Certificate, alert_value as "Certificate Status"

Here I am trying to add one more column called incident, to extract the incident details with respect to the certificate values. If an INC is available, it should display the number; otherwise null. To extract the INC, I am using the query below:

index=acn_ac_snow_ticket_idx code_message=create uid="*Saml : Days to expire*" OR uid="*Self_Signed : Days to expire*" OR uid="*CA : Days to expire*" OR uid="*Entrust : Days to expire*"
| rex field=_raw "\"(?<INC>INC\d+),"
| rex field=uid "(?i)^(?P<source_host>.+?)__"
| table INC uid log_description source_host
| dedup INC uid log_description source_host
| rename INC as "Ticket_Number"
Hello team, for context this is a Splunk Cloud environment with an ES and an ad hoc search head. Today I tried to change an HTTP Event Collector input from sourcetype _json to wiz. The wiz events are JSON events with a date midway through the event. Sample event (heavily obfuscated as it is company data):

{
  "event": {
    "trigger": {
      "source": "CLOUD_EVENTS",
      "type": "Created",
      "ruleId": "<rule_id>",
      "ruleName": "WIZ-Splunk Integration"
    },
    "event": {
      "name": "<eventname>",
      "eventURL": "<url>",
      "cloudPlatform": "AWS",
      "timestamp": "2024-06-12T03:01:18Z",
      "source": "<amazon source>",
      "category": "List",
      "path": null,
      "actor": {
        "name": "<account name>",
        "type": "SERVICE_ACCOUNT",
        "IP": "<FQDN>",
        "actingAs": {
          "name": "<role_name>",
          "providerUniqueId": "<UniqID",
          "type": "SERVICE_ACCOUNT",
          "rawlog": {"addendum":null,"additionalEventData":null,"apiVersion":null,"awsRegion":"us-east-1","errorCode":null,"errorMessage":null,"eventCategory":"<event_category>","eventID":"<event_id>","eventName":"<event_name>","eventSource":"<amazon_link>","eventTime":"2024-06-12T03:01:18Z","eventType":"<type of event>","eventVersion":"<version number>","managementEvent":true,"readOnly":true,"recipientAccountId":"<account ID>","requestID":"<request_id>","requestParameters":{"DescribeVpcEndpointsRequest":{"VpcEndpointId":{"content":"<VPCENDPOINTID>","tag":1}}},"resources":null,"responseElements":null,"serviceEventDetails":null,"sessionCredentialFromConsole":null,"sharedEventID":null,"sourceIPAddress":"<source ip>","tlsDetails":null,"userAgent":"<user agent>","userIdentity":{"accountId":"<account ID>","arn":"<ARN>","invokedBy":"<USER>","principalId":"<principal ID>","sessionContext":{"attributes":{"creationDate":"2024-06-12T03:01:17Z","mfaAuthenticated":"false"},"sessionIssuer":{"accountId":"<account ID>","arn":"<ARN>","principalId":"<principal ID>","type":"Role","userName":"<role name>"}},"type":"AssumedRole"},"vpcEndpointId":null}
        }
      },
      "subjectResource": {
        "name": "",
        "type": "",
        "providerUniqueId": "",
        "externalId": "<external ID>",
        "region": "us-east-1",
        "kubernetesCluster": "",
        "kubernetesNamespace": "",
        "account": {"externalId":"<external ID>","id":"<ID>"}
      },
      "matchedRules": " ruleId: ; ruleName: <RULE NAME> "
    }
  }
}

To accomplish the sourcetype name change, I cloned the current configuration for _json under the app search, which was as follows:

CHARSET = UTF-8
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
KV_MODE = none
SHOULD_LINEMERGE = true
category = structured
disabled = false
pulldown_type = true
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true

This cloned the config successfully, but notably put it under the app 000-self-service rather than search. I then set the input to the new sourcetype wiz.

Following this change, some events began breaking incorrectly at the first timestamp in the log, a behavior not previously observed with sourcetype _json, which had the same config.
Sample broken event:

Event 1:

{
  "event": {
    "trigger": {
      "source": "CLOUD_EVENTS",
      "type": "Created",
      "ruleId": "<rule_id>",
      "ruleName": "WIZ-Splunk Integration"
    },
    "event": {
      "name": "<eventname>",
      "eventURL": "<url>",
      "cloudPlatform": "AWS",

Event 2:

      "timestamp": "2024-06-12T03:01:18Z",
      "source": "<amazon source>",
      "category": "List",
      "path": null,
      "actor": {
        "name": "<account name>",
        "type": "SERVICE_ACCOUNT",
        "IP": "<FQDN>",
        "actingAs": {
          "name": "<role_name>",
          "providerUniqueId": "<UniqID",
          "type": "SERVICE_ACCOUNT",
          "rawlog": {"addendum":null,"additionalEventData":null,"apiVersion":null,"awsRegion":"us-east-1","errorCode":null,"errorMessage":null,"eventCategory":"<event_category>","eventID":"<event_id>","eventName":"<event_name>","eventSource":"<amazon_link>","eventTime":"2024-06-12T03:01:18Z","eventType":"<type of event>","eventVersion":"<version number>","managementEvent":true,"readOnly":true,"recipientAccountId":"<account ID>","requestID":"<request_id>","requestParameters":{"DescribeVpcEndpointsRequest":{"VpcEndpointId":{"content":"<VPCENDPOINTID>","tag":1}}},"resources":null,"responseElements":null,"serviceEventDetails":null,"sessionCredentialFromConsole":null,"sharedEventID":null,"sourceIPAddress":"<source ip>","tlsDetails":null,"userAgent":"<user agent>","userIdentity":{"accountId":"<account ID>","arn":"<ARN>","invokedBy":"<USER>","principalId":"<principal ID>","sessionContext":{"attributes":{"creationDate":"2024-06-12T03:01:17Z","mfaAuthenticated":"false"},"sessionIssuer":{"accountId":"<account ID>","arn":"<ARN>","principalId":"<principal ID>","type":"Role","userName":"<role name>"}},"type":"AssumedRole"},"vpcEndpointId":null}
        }
      },
      "subjectResource": {
        "name": "",
        "type": "",
        "providerUniqueId": "",
        "externalId": "<external ID>",
        "region": "us-east-1",
        "kubernetesCluster": "",
        "kubernetesNamespace": "",
        "account": {"externalId":"<external ID>","id":"<ID>"}
      },
      "matchedRules": " ruleId: ; ruleName: <RULE NAME> "
    }
  }
}

This was strange behavior, but it was likely caused by the default setting BREAK_ONLY_BEFORE_DATE = true. To remedy this I edited the sourcetype config for wiz by adding the following:

BREAK_ONLY_BEFORE = {[\r\n]\s+\"event\"\:
BREAK_ONLY_BEFORE_DATE = false

Note I left the value below as true:

SHOULD_LINEMERGE = true

However, after clicking save, the following changes were made instead:

BREAK_ONLY_BEFORE = {[\r\n]\s+\"event\"\:
LINE_BREAKER = {[\r\n]\s+\"event
SHOULD_LINEMERGE = false

BREAK_ONLY_BEFORE_DATE could not be saved, and SHOULD_LINEMERGE could not be set to true while BREAK_ONLY_BEFORE was present. I tried performing this change many times over several hours, and tried creating unrelated sourcetypes with BREAK_ONLY_BEFORE_DATE, but was unable to set this setting on Splunk Cloud. In addition, any attempt to set SHOULD_LINEMERGE to true while BREAK_ONLY_BEFORE was present resulted in SHOULD_LINEMERGE being set to false and LINE_BREAKER being set to the same value as BREAK_ONLY_BEFORE. Other settings could be set as expected. A final note for information: the timestamp was set to auto.

Are these configurations invalid in general, or just unable to be set in Settings > Source types > Advanced on Splunk Cloud?
As an additional note, none of the settings I applied restored the earlier event-breaking behavior, and I was forced to revert the input back to sourcetype _json, where breaking worked as expected. I would appreciate any answers and am happy to provide more info if needed. Apologies for the long read.
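For what it's worth, the rewrite Splunk Cloud applied hints at the usual approach for structured data: break on LINE_BREAKER with SHOULD_LINEMERGE = false rather than BREAK_ONLY_BEFORE. A minimal props.conf sketch; the lookahead regex and TIME_PREFIX are assumptions based on the sample event above, not a confirmed fix:

[wiz]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\{\s*"event")
TIME_PREFIX = "timestamp":\s*"
MAX_TIMESTAMP_LOOKAHEAD = 25

Only the first capture group of LINE_BREAKER is discarded as the event boundary, so the lookahead keeps the opening brace with the new event, and TIME_PREFIX steers timestamp extraction to the intended "timestamp" key instead of the first date encountered.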
Your original search is rather unclear, so here is an attempt at removing the need to join, i.e. to do it the Splunk way, so it searches both data sets and creates the common fields.

Your original join attempt wanted source_host, but you are splitting by ticket number, so is it possible that there can be multiple source_host per ticket number? Can there also be more than one instance per ticket number?

(index=acn_ac_snow_ticket_idx code_message=create uid="*Saml : Days to expire*" OR uid="*Self_Signed : Days to expire*" OR uid="*CA : Days to expire*" OR uid="*Entrust : Days to expire*") OR (index=acn_lendlease_certificate_tier3_idx tower=*)
| rex field=_raw "\"(?<INC>INC\d+),"
| eval ticket=coalesce(INC, Ticket_Number)
| rex field=uid "(?i)^(?P<snow_source_host>.+?)__"
| eval source_host=coalesce(snow_source_host, source_host)
| stats latest(tower) as Tower, latest(source_host) as source_host, latest(metric_value) as "Days To Expire", latest(alert_value) as alert_value, latest(add_info) as "Additional Info" values(instance) as instance by ticket
| eval alert_value=case(alert_value==100,"Active",alert_value==300,"About to Expire", alert_value==500,"Expired")
| where alert_value="Active"
| search Tower="*" AND alert_value="*"
| sort "Days To Expire"
| rename instance as "Serial Number / Server ID", Tower as "Certificate Type", source_host as Certificate, alert_value as "Certificate Status"

Not sure if this will give you what you want - but if not, please provide some anonymised data for each data type and show how you are trying to combine them, because it's not clear from the search.
yes