All Posts

Has nobody come across this issue?
Hi, the best way to avoid logs being lost would be to stop the UFs and HFs before taking down the indexers. Another way would be to disable the inputs in inputs.conf, by creating/updating a configuration like this:

[perfmon:*]
disabled = true

[WinEventLog:]
disabled = true

[monitor://<path>]
disabled = true

------------ If this was helpful, some karma would be appreciated.
Hi, do you have a distributed environment with indexers and search heads, or a single Splunk instance? Try running this search in Splunk and look for error or warning messages:

index=_internal "csv_data"
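If that returns too much noise, a slightly more targeted variant may help (a sketch; it assumes the relevant file or lookup name contains "csv_data", as in the search above, and that the messages come from splunkd):

index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN) "csv_data"

Restricting to ERROR and WARN log levels tends to surface parsing and lookup problems without wading through routine INFO traffic.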
Hi, try this sed expression instead:

... | rex field=ip mode=sed "s/\b0+([0-9]+)/\1/g"

------------ If this was helpful, some karma would be appreciated.
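As a quick sanity check, here is a self-contained run (the sample IPs are made up) showing all octets being cleaned:

| makeresults
| eval ip=split("010.001.2.3,10.001.002.3,10.1.2.003", ",")
| mvexpand ip
| rex field=ip mode=sed "s/\b0+([0-9]+)/\1/g"
| table ip

Each row should come back as 10.1.2.3; the trailing g flag is what makes the substitution apply to every octet rather than stopping at the first match.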
Hello All, we have a server on which an indexer and a search head are deployed, and we are receiving logs from UFs and HFs. Due to a requirement, we may need some downtime for the server on which the indexer and search head are deployed. Will there be any log loss due to this server downtime? If yes, for how long will logs be lost? If the UF caches logs locally, for how long does it cache them, and is there any dependency on the cache memory available on the UF server?
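For reference, how long a UF can buffer depends on its output queue configuration in outputs.conf; a minimal sketch (the group name, server, and queue size are placeholders, not recommendations):

# outputs.conf on the UF - illustrative values only
[tcpout]
useACK = true
maxQueueSize = 100MB

[tcpout:primary_indexers]
server = indexer.example.com:9997

With useACK enabled the forwarder holds events until the indexer acknowledges them, and maxQueueSize bounds how much it can hold while the indexer is down. Once the queue fills, file monitor inputs simply resume from their saved offsets when the indexer returns, but ephemeral sources (e.g. syslog over UDP) can be lost.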
So I have three servers in this Splunk infrastructure: a SH, an indexer, and a forwarder. I have installed the free 10GB dev license as well as the 50GB one, and am not using clustering anywhere. I installed and followed this guide to send test data to our boxes: https://splunkbase.splunk.com/app/1924

I set the app up on the forwarder, and I can see the data in the index I created (testindex) on the indexer, where I can view the sample data. I cannot, however, view the data from the SH.

My problem right now is that I can't find what I am missing. I have looked everywhere and can't figure it out. I have confirmed my server.conf, distsearch.conf, and the outputs.conf on the forwarder; I have made the pass4SymmKey values match on all machines; and I can ping each server from the others, so connectivity is good. What else can I check? Most of the Splunk docs I see are for clustered environments, and I am struggling to find relevant docs.

I have set the SH to be the license master, and both machines point to the SH as license manager, yet on the SH I do not see any instance other than itself as an indexer. When I go to add a new pool, like I see on our DMC, I can only add the SH itself as an available indexer. On our production DMC, we have all of the indexers listed. I should be seeing the indexer showing up somewhere within the SH, but I don't see any mention of it anywhere; checking the _internal logs, I just see the SH's own hostname mentioned. I'm having trouble figuring out where I'm going wrong. The SH should see the indexer based on my findings and setup.

Any help or guidance would be appreciated. Thank you.
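For what it's worth, on a non-clustered setup the SH only "sees" an indexer once it has been added as a search peer (Settings > Distributed search > Search peers, or in distsearch.conf); the license manager relationship alone will not make it appear. A minimal sketch (the host and port are placeholders):

# distsearch.conf on the search head - placeholder host/port
[distributedSearch]
servers = https://indexer.example.com:8089

Note that listing a peer here is not enough by itself; trust has to be established with the peer's credentials, which the UI or CLI handles for you when you add the peer that way.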
@bowesmana I'm still not getting the expected result. The regex pattern should remove all leading zeros in each octet; however, in this case it only removes the leading zeros from the first octet in which it finds a match. For example:

1) 010.1.2.3 -> removes the zero in the 1st octet -> 10.1.2.3
2) 10.001.2.3 -> removes the zeros in the 2nd octet -> 10.1.2.3
3) 010.001.2.3 -> removes the zero in the 1st octet only -> 10.001.2.3
4) 10.001.002.3 -> removes the zeros in the 2nd octet only -> 10.1.002.3
5) 10.1.2.003 -> removes the zeros in the 4th octet -> 10.1.2.3

Regards, Sid
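For what it's worth, the behaviour described is consistent with a sed substitution that lacks the trailing g (global) flag, which replaces only the first match per value. A side-by-side sketch (the sample IP is made up):

| makeresults
| eval ip="010.001.002.3", ip_g=ip
| rex field=ip mode=sed "s/\b0+([0-9]+)/\1/"
| rex field=ip_g mode=sed "s/\b0+([0-9]+)/\1/g"
| table ip ip_g

Here ip comes back as 10.001.002.3 (first octet only), while ip_g comes back as 10.1.2.3.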
Hi bowesmana, thanks for the efforts. We have these data sets:

index=acn_lendlease_certificate_tier3_idx tower=Self_Signed_Certificate
| stats latest(tower) as Tower, latest(source_host) as source_host, latest(metric_value) as "Days To Expire", latest(alert_value) as alert_value, latest(add_info) as "Additional Info" by instance
| eval alert_value=case(alert_value==100,"Active",alert_value==300,"About to Expire", alert_value==500,"Expired")
| where alert_value="About to Expire"
| search Tower="*" AND alert_value="*"
| sort "Days To Expire"
| rename instance as "Serial Number / Server ID", Tower as "Certificate Type", source_host as Certificate, alert_value as "Certificate Status"

Here I am trying to add one more column, called "Incident", to show the incident details for the certificate values: if an INC is available, it should display the number; otherwise, null. To extract the INC, I am using the query below:

index=acn_ac_snow_ticket_idx code_message=create uid="*Saml : Days to expire*" OR uid="*Self_Signed : Days to expire*" OR uid="*CA : Days to expire*" OR uid="*Entrust : Days to expire*"
| rex field=_raw "\"(?<INC>INC\d+),"
| rex field=uid "(?i)^(?P<source_host>.+?)__"
| table INC uid log_description source_host
| dedup INC uid log_description source_host
| rename INC as "Ticket_Number"
Hello team, for context this is a Splunk Cloud environment with an ES and an ad hoc search head. Today I tried to change an HTTP Event Collector input from sourcetype _json to wiz. The wiz events are JSON events with a date midway through the event. Sample event (heavily obfuscated as it is company data):

{
  "event": {
    "trigger": {
      "source": "CLOUD_EVENTS",
      "type": "Created",
      "ruleId": "<rule_id>",
      "ruleName": "WIZ-Splunk Integration"
    },
    "event": {
      "name": "<eventname>",
      "eventURL": "<url>",
      "cloudPlatform": "AWS",
      "timestamp": "2024-06-12T03:01:18Z",
      "source": "<amazon source>",
      "category": "List",
      "path": null,
      "actor": {
        "name": "<account name>",
        "type": "SERVICE_ACCOUNT",
        "IP": "<FQDN>",
        "actingAs": {
          "name": "<role_name>",
          "providerUniqueId": "<UniqID",
          "type": "SERVICE_ACCOUNT",
          "rawlog": {"addendum":null,"additionalEventData":null,"apiVersion":null,"awsRegion":"us-east-1","errorCode":null,"errorMessage":null,"eventCategory":"<event_category>","eventID":"<event_id>","eventName":"<event_name>","eventSource":"<amazon_link>","eventTime":"2024-06-12T03:01:18Z","eventType":"<type of event>","eventVersion":"<version number>","managementEvent":true,"readOnly":true,"recipientAccountId":"<account ID>","requestID":"<request_id>","requestParameters":{"DescribeVpcEndpointsRequest":{"VpcEndpointId":{"content":"<VPCENDPOINTID>","tag":1}}},"resources":null,"responseElements":null,"serviceEventDetails":null,"sessionCredentialFromConsole":null,"sharedEventID":null,"sourceIPAddress":"<source ip>","tlsDetails":null,"userAgent":"<user agent>","userIdentity":{"accountId":"<account ID>","arn":"<ARN>","invokedBy":"<USER>","principalId":"<principal ID>","sessionContext":{"attributes":{"creationDate":"2024-06-12T03:01:17Z","mfaAuthenticated":"false"},"sessionIssuer":{"accountId":"<account ID>","arn":"<ARN>","principalId":"<principal ID>","type":"Role","userName":"<role name>"}},"type":"AssumedRole"},"vpcEndpointId":null}
        }
      },
      "subjectResource": {
        "name": "",
        "type": "",
        "providerUniqueId": "",
        "externalId": "<external ID>",
        "region": "us-east-1",
        "kubernetesCluster": "",
        "kubernetesNamespace": "",
        "account": {"externalId":"<external ID>","id":"<ID>"}
      },
      "matchedRules": " ruleId: ; ruleName: <RULE NAME> "
    }
  }
}

To accomplish the sourcetype name change I cloned the current configuration for _json under the app search, which was as follows:

CHARSET = UTF-8
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
KV_MODE = none
SHOULD_LINEMERGE = true
category = structured
disabled = false
pulldown_type = true
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true

This cloned the config successfully, but notably put it under the app 000-self-service rather than search. I then set the input to the new sourcetype wiz.

Following this change, some events began breaking incorrectly at the first timestamp in the log, a behavior not previously observed with sourcetype _json, which had the same config.
Sample broken event:

Event 1:

{
  "event": {
    "trigger": {
      "source": "CLOUD_EVENTS",
      "type": "Created",
      "ruleId": "<rule_id>",
      "ruleName": "WIZ-Splunk Integration"
    },
    "event": {
      "name": "<eventname>",
      "eventURL": "<url>",
      "cloudPlatform": "AWS",

Event 2:

      "timestamp": "2024-06-12T03:01:18Z",
      "source": "<amazon source>",
      "category": "List",
      "path": null,
      "actor": {
        "name": "<account name>",
        "type": "SERVICE_ACCOUNT",
        "IP": "<FQDN>",
        "actingAs": {
          "name": "<role_name>",
          "providerUniqueId": "<UniqID",
          "type": "SERVICE_ACCOUNT",
          "rawlog": {"addendum":null,"additionalEventData":null,"apiVersion":null,"awsRegion":"us-east-1","errorCode":null,"errorMessage":null,"eventCategory":"<event_category>","eventID":"<event_id>","eventName":"<event_name>","eventSource":"<amazon_link>","eventTime":"2024-06-12T03:01:18Z","eventType":"<type of event>","eventVersion":"<version number>","managementEvent":true,"readOnly":true,"recipientAccountId":"<account ID>","requestID":"<request_id>","requestParameters":{"DescribeVpcEndpointsRequest":{"VpcEndpointId":{"content":"<VPCENDPOINTID>","tag":1}}},"resources":null,"responseElements":null,"serviceEventDetails":null,"sessionCredentialFromConsole":null,"sharedEventID":null,"sourceIPAddress":"<source ip>","tlsDetails":null,"userAgent":"<user agent>","userIdentity":{"accountId":"<account ID>","arn":"<ARN>","invokedBy":"<USER>","principalId":"<principal ID>","sessionContext":{"attributes":{"creationDate":"2024-06-12T03:01:17Z","mfaAuthenticated":"false"},"sessionIssuer":{"accountId":"<account ID>","arn":"<ARN>","principalId":"<principal ID>","type":"Role","userName":"<role name>"}},"type":"AssumedRole"},"vpcEndpointId":null}
        }
      },
      "subjectResource": {
        "name": "",
        "type": "",
        "providerUniqueId": "",
        "externalId": "<external ID>",
        "region": "us-east-1",
        "kubernetesCluster": "",
        "kubernetesNamespace": "",
        "account": {"externalId":"<external ID>","id":"<ID>"}
      },
      "matchedRules": " ruleId: ; ruleName: <RULE NAME> "
    }
  }
}

This was strange behavior, but it was likely caused by the default setting BREAK_ONLY_BEFORE_DATE = true. To remedy this I edited the sourcetype config for wiz by adding the following:

BREAK_ONLY_BEFORE = {[\r\n]\s+\"event\"\:
BREAK_ONLY_BEFORE_DATE = false

Note that I left the value below as true:

SHOULD_LINEMERGE = true

However, after clicking save, the following changes were made:

BREAK_ONLY_BEFORE = {[\r\n]\s+\"event\"\:
LINE_BREAKER = {[\r\n]\s+\"event
SHOULD_LINEMERGE = false

The configuration for BREAK_ONLY_BEFORE_DATE could not be saved, and SHOULD_LINEMERGE could not be set to true while BREAK_ONLY_BEFORE was present. I tried performing this change many times over several hours, and tried creating unrelated sourcetypes with BREAK_ONLY_BEFORE_DATE, but was unable to set this setting on Splunk Cloud. In addition, any attempt to set SHOULD_LINEMERGE to true while BREAK_ONLY_BEFORE was present resulted in SHOULD_LINEMERGE being set to false and LINE_BREAKER being set to the same value as BREAK_ONLY_BEFORE. Other settings could be set as expected. A final note for information: the timestamp was set to auto.

Are these configurations invalid in general, or just unable to be set in Settings > Source types > Advanced in Splunk Cloud?
As an additional note, none of the settings I applied were able to restore the earlier event-breaking behavior, and I was forced to revert the input back to sourcetype _json, where breaking worked as expected. I would appreciate any answers and am happy to provide more info if needed. Apologies for the long read.
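For anyone comparing notes: one way to express "break only before the opening brace" without BREAK_ONLY_BEFORE is to put the anchor in LINE_BREAKER itself. A props.conf sketch (the regex and timestamp settings are illustrative and untested against real Wiz payloads):

# props.conf sketch - illustrative, not verified
[wiz]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\{\s*"event"
TIME_PREFIX = "timestamp"\s*:\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%SZ
MAX_TIMESTAMP_LOOKAHEAD = 64

Only the capture group is discarded at the break, so each new event still starts with its opening brace. Also worth noting: line-breaking settings only apply where data is parsed as a stream (e.g. the HEC raw endpoint); events sent to the HEC event endpoint arrive pre-framed and are not line-merged.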
Your original search is rather unclear, so here is an attempt at removing the need to join, i.e. to do it the Splunk way, so it searches both data sets and creates the common fields. Your original join attempt wanted source_host, but you are splitting by ticket number, so is it possible that there can be multiple source_host values per ticket number? Can there also be more than one instance per ticket number?

(index=acn_ac_snow_ticket_idx code_message=create uid="*Saml : Days to expire*" OR uid="*Self_Signed : Days to expire*" OR uid="*CA : Days to expire*" OR uid="*Entrust : Days to expire*") OR (index=acn_lendlease_certificate_tier3_idx tower=*)
| rex field=_raw "\"(?<INC>INC\d+),"
| eval ticket=coalesce(INC, Ticket_Number)
| rex field=uid "(?i)^(?P<snow_source_host>.+?)__"
| eval source_host=coalesce(snow_source_host, source_host)
| stats latest(tower) as Tower, latest(source_host) as source_host, latest(metric_value) as "Days To Expire", latest(alert_value) as alert_value, latest(add_info) as "Additional Info", values(instance) as instance by ticket
| eval alert_value=case(alert_value==100,"Active",alert_value==300,"About to Expire", alert_value==500,"Expired")
| where alert_value="Active"
| search Tower="*" AND alert_value="*"
| sort "Days To Expire"
| rename instance as "Serial Number / Server ID", Tower as "Certificate Type", source_host as Certificate, alert_value as "Certificate Status"

Not sure if this will give you what you want - but if not, please provide some anonymised data for each data type and show how you are trying to combine them, because it's not clear from the search.
yes
and your second data set contains these fields? tower, metric_value, alert_value, add_info, instance, source_host, Ticket_Number
Yes, and I have created a regex to extract the incident details and source host.
Are you using selfjoin or join? Either way, selfjoin is not the right command - and join is also not the way to do things in Splunk, as it has limitations. However, your SPL indicates your 2 data sets have:

index=acn_ac_snow_ticket_idx - INC (Ticket_Number), uid, log_description, source_host
index=acn_lendlease_certificate_tier3_idx - tower, metric_value, alert_value, add_info, instance, source_host

and you are trying to join these two on source_host.
index=acn_ac_snow_ticket_idx code_message=create uid="*Saml : Days to expire*" OR uid="*Self_Signed : Days to expire*" OR uid="*CA : Days to expire*" OR uid="*Entrust : Days to expire*"
| rex field=_raw "\"(?<INC>INC\d+),"
| rex field=uid "(?i)^(?P<source_host>.+?)__"
| table INC uid log_description source_host
| dedup INC uid log_description source_host
| rename INC as "Ticket_Number"
| selfjoin source_host [ search index=acn_lendlease_certificate_tier3_idx tower=* | table *]
| stats latest(tower) as Tower, latest(source_host) as source_host, latest(metric_value) as "Days To Expire", latest(alert_value) as alert_value, latest(add_info) as "Additional Info" by instance,Ticket_Number
| eval alert_value=case(alert_value==100,"Active",alert_value==300,"About to Expire", alert_value==500,"Expired")
| where alert_value="Active"
| search Tower="*" AND alert_value="*"
| sort "Days To Expire"
| rename instance as "Serial Number / Server ID", Tower as "Certificate Type", source_host as Certificate, alert_value as "Certificate Status"

I am trying to map the incident number to the source_host using the join command, but it's not working as expected.
What do you have in your real search before you do the eventstats? It will push all the data to the search head, including _raw, so unless you use the fields statement you will be sending all the event data to the SH. You are also doing lots of multivalue splits, which is going to be pretty memory hungry on the SH.

Building a tree is a tricky thing in Splunk, but if your network paths do not change often, it may be possible to create a lookup so that for 'Server-A' you can look up its network and discover the behind-firewall state. What is the depth of the tree in your case? Your example is 3-tier, going from server via the LB - if it's only 3-tier, then you could perhaps build your pathways just by fetching the name="LoadBalancer" objects and using stats values() rather than eventstats to create the lookup, as at that point you don't care about the IPs.
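As an illustration of the fields point, restricting columns before the eventstats keeps _raw off the wire (the index, sourcetype, and field names here are hypothetical):

index=network_inventory sourcetype=lb_config
| fields name network dest_ip
| eventstats values(network) as networks by name

The fields command drops everything else, including _raw, before events leave the indexers, so the SH only has to hold the columns you actually aggregate on.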
The values() statement requires 'eval', i.e.

| eventstats values(eval(if(match(name,"student-1"), name, null()))) as student by grade
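A self-contained run (the grade and names are made up) shows the shape of the result:

| makeresults
| eval grade="A", name=split("student-1,student-2,teacher-1", ",")
| mvexpand name
| eventstats values(eval(if(match(name,"student-1"), name, null()))) as student by grade

Every event in grade A gets student="student-1", because the eval inside values() returns null for non-matching names and values() only keeps the non-null ones.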
mvfilter only takes a single field: https://docs.splunk.com/Documentation/Splunk/9.2.0/SearchReference/MultivalueEvalFunctions#mvfilter.28.26lt.3Bpredicate.26gt.3B.29 Use mvmap instead:

| makeresults
| eval fullcode="code-abc-1111,code-abc-2222,code-xyz-1111,code-xyz-222"
| eval partialcode="code-abc"
| makemv delim="," fullcode
| eval fullcode2=mvmap(fullcode, if(match(fullcode,partialcode), fullcode, null()))
That's not a valid rex sed statement; use this example:

| makeresults
| eval ip=split("010.1.2.3,10.013.2.3",",")
| mvexpand ip
| rex field=ip mode=sed "s/\b0+\B//"
Hello All, I'm trying to remove leading zeros in IP addresses using rex and mode=sed. The regular expression I'm trying to use for substitution is "\b0+\B". However, it's not returning the required output. Example:

| rex field=<IP address field> mode=sed "\b0+\B"

I even tried with double backslashes, but no luck. Kindly assist to resolve this issue. Regards, Sid