All Topics

Hello, I have programs which write status events to Splunk. At the beginning they write EVENT=START and at the end they write EVENT=END, both with a matching UID. I have created an alert which monitors for a START event without a corresponding END event, in order to find when a program may have terminated abruptly. The alert is:

index=indxtst
| table _time source EVENT_TYPE EVENT_SUBTYPE UID EVENT
| eval stat=case(EVENT=="START","START",EVENT=="END","END")
| eventstats dc(stat) as dc_stat by UID
| search dc_stat=1 AND stat=START

This alert works fine, except that sometimes it fires while the program is still running and simply hasn't written an END event yet. To fix this, I would like to add a delay, but that is not working:

index=indxtst
| table _time source EVENT_TYPE EVENT_SUBTYPE UID EVENT
| eval stat=case(EVENT=="START","START",EVENT=="END","END")
| eventstats dc(stat) as dc_stat by UID
| search dc_stat=1 AND stat=START AND earliest==-15m AND latest==-5m

This pulls back no records at all, even when appropriate test data is created. What am I doing wrong?
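One direction I am considering, sketched under the assumption that the goal is roughly a five-minute grace period: earliest and latest are time modifiers that belong in the base search (with a single =), not field comparisons after | table, so they could be moved up front and the grace period expressed against _time instead. Field and index names are taken from my search above; the window sizes are assumptions.

index=indxtst earliest=-15m
| eval stat=case(EVENT=="START","START",EVENT=="END","END")
| eventstats dc(stat) as dc_stat by UID
| where dc_stat==1 AND stat=="START" AND _time <= relative_time(now(), "-5m")
| table _time source EVENT_TYPE EVENT_SUBTYPE UID EVENT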
index=XXX sourcetype=XXX [| inputlookup Edge_Nodes_All.csv where Environment="*" AND host="*" | fields host]
| fields cluster, host, user, total_cpu
| join type=inner host [search `gold_mpstat` OR `silver_mpstat` OR `platinum_mpstat` OR `palladium_mpstat` [| inputlookup Edge_Nodes_All.csv where Environment="*" AND host="*" | fields host] | stats max(eval(id+1)) as cores by host]
| eval pct_CPU = round(total_cpu/cores,2)
| stats max(total_cpu) as total_cpu, max(pct_CPU) as "CPU %" by user, host, cores
| table host user cores total_cpu "CPU %"
| sort - "CPU %" | head 10

In the results of the search above (screenshot not included here), the second column contains ADS-IDs and service-IDs, which mostly end with s, g, or p according to our environments (silver, gold, platinum). The ADS-IDs are in the bd_users_hierarchy.csv lookup file (note: email addresses have been grayed out for security reasons). The service-IDs are in the index below:

index=imdc_ops_13m sourcetype=usecase_contact app_id="*"
| dedup app_id
| table _time app_id app_owner app_team_dl

I was trying a subsearch using join but was not successful. Any help is appreciated.
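A sketch of what I was attempting, in case it helps to clarify the goal: enrich ADS-IDs from the lookup and service-IDs from the contact index. The lookup column names (ads_id, owner_email) are placeholders and would need to match the real columns in bd_users_hierarchy.csv; everything else uses the fields named above.

index=XXX sourcetype=XXX [| inputlookup Edge_Nodes_All.csv where Environment="*" AND host="*" | fields host]
| fields cluster, host, user, total_cpu
| lookup bd_users_hierarchy.csv ads_id AS user OUTPUTNEW owner_email
| join type=left user [search index=imdc_ops_13m sourcetype=usecase_contact app_id="*" | dedup app_id | rename app_id AS user | fields user app_owner app_team_dl]
| eval owner=coalesce(owner_email, app_owner)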
Hi, I started using tags by tagging my hosts with the environment they are in and the service the host belongs to. Using these tags on log/event indexes works perfectly well, but I am not able to filter by tags in mstats. I tried many variations of WHERE tag=env.prod or WHERE "tag::host"="env.prod", but none return any results. I checked with mpreview that the tags really are there (it shows all the tags on the specific hosts), and I was also able to filter with a small workaround using the tags command:

| mstats rate(os.unix.nmon.storage.diskread) AS read rate(os.unix.nmon.storage.diskwrite) AS write WHERE `my-metric-indizes` AND (host=*) BY host span=5m
| tags
| where "service.vault" IN (tag) AND "env.prod" IN (tag)
| stats sum(read) AS read, sum(write) AS write by _time, host
| timechart max(read) as read, max(write) as write bins=1000 by host

Is there a way to filter by a tag directly in mstats? The workaround is not very performance-friendly...
I have many dashboards that are already in the classic dashboard format; their source code is XML. I made a new dashboard in Dashboard Studio, and I wish to migrate this dashboard to the classic dashboard format, i.e. convert my JSON-based dashboard into a Simple XML dashboard. I tried researching and surfing the web, but I only found resources for migrating from classic dashboards to the new format. Could someone please help me with this? TIA.
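From my research so far there does not seem to be a built-in converter in this direction, so I assume the panels would have to be rebuilt by hand. This is the minimal Simple XML skeleton I would start from; the label, query, and time range are placeholders, not taken from my actual dashboard:

<dashboard version="1.1">
  <label>Rebuilt dashboard</label>
  <row>
    <panel>
      <title>Example panel</title>
      <chart>
        <search>
          <query>index=_internal | timechart count by sourcetype</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="charting.chart">line</option>
      </chart>
    </panel>
  </row>
</dashboard>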
Hello, my graphs in Splunk are becoming very numerous over time. I would therefore like to build a kind of accordion so that I can expand and collapse the individual areas. Can someone please tell me how to do this? Best regards, Alex
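Is something like the following Simple XML sketch the right direction? It assumes a form with a link input that sets/unsets tokens and rows that depend on them; the section names and queries are only placeholders.

<form version="1.1">
  <fieldset submitButton="false">
    <input type="link" token="section">
      <label>Section</label>
      <choice value="cpu">CPU graphs</choice>
      <choice value="disk">Disk graphs</choice>
      <default>cpu</default>
      <change>
        <condition value="cpu">
          <set token="show_cpu">true</set>
          <unset token="show_disk"></unset>
        </condition>
        <condition value="disk">
          <set token="show_disk">true</set>
          <unset token="show_cpu"></unset>
        </condition>
      </change>
    </input>
  </fieldset>
  <row depends="$show_cpu$">
    <panel><title>CPU</title><chart><search><query>index=_internal | timechart count</query></search></chart></panel>
  </row>
  <row depends="$show_disk$">
    <panel><title>Disk</title><chart><search><query>index=_internal | timechart count</query></search></chart></panel>
  </row>
</form>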
After upgrading Splunk Universal Forwarders from version 8.1.x to 9.2.x on Windows machines in a distributed environment, my question is: is it mandatory for both the old and new services to run simultaneously (in parallel), or should only the new version be running? Also, must the old version be deleted or not?
Hello, in ITSI I would like to receive alerts when an entity is critical or high, and I cannot find how to configure that. On the other hand, when a KPI alert shows up, I get the message "No impacted entities" in the episode review, even though there are entities in critical state. Thanks in advance for the help.
Hello All, we have a server on which an indexer and a search head are deployed, and we receive logs from UFs and HFs. Due to a requirement we might need some downtime for this server. Will there be any log loss due to this downtime? If so, how much log data will be lost? If the UF caches logs locally, for how long does it cache them, and is there any dependency on the cache memory available on the UF server?
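For context, my understanding is that for monitored files the UF remembers its read position and its output queue simply blocks while the indexer is unreachable, so it should resume where it left off once the server is back; the forwarder-side buffering is governed by outputs.conf settings like the sketch below (server name and sizes are placeholders, not our real config). Is that understanding correct?

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer.example.com:9997
useACK = true
maxQueueSize = 100MB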
So I have three servers in this Splunk infrastructure: a SH, an indexer, and a forwarder. I have installed the free 10 GB dev license as well as the 50 GB one, and am not using clustering anywhere. I have installed and followed this guide to send test data to our boxes: https://splunkbase.splunk.com/app/1924

I set the app up on the forwarder, and I can see the data in the index I created (testindex) on the indexer. I can view the sample data. I cannot, however, view the data from the SH. My problem right now is that I cannot find what I am missing. I have looked everywhere and can't figure it out. I have confirmed my server.conf, distsearch.conf, and the outputs.conf on the forwarder, I have made the pass4SymmKey the same on all machines, and I can ping each server from the others, so connectivity is good.

What else can I check? Most of the Splunk docs I see are for clustered environments, and I am struggling to find relevant docs.

I have set the SH to be the license master, and both machines point to the SH as license manager, yet on the SH I do not see any instance other than itself as an indexer. When I go to add a new pool, like I see on our DMC, I can only add the SH itself as an available indexer. On our production DMC, we have all of the indexers listed. I should be seeing the indexer showing up somewhere within the SH, but I don't see it mentioned anywhere. Checking the _internal logs, I just see the SH's own hostname. I'm having trouble figuring out where I'm going wrong; the SH should see the indexer based on my findings and setup.

Any help or guidance would be appreciated. Thank you.
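One thing I am now wondering: the license manager / pool pages only deal with license usage, so perhaps the indexer additionally has to be added as a distributed search peer on the SH (Settings > Distributed search > Search peers), or via the CLI along the lines below (hostname and credentials are placeholders). Is that the missing piece?

$SPLUNK_HOME/bin/splunk add search-server https://indexer.example.com:8089 -auth admin:shpassword -remoteUsername admin -remotePassword indexerpassword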
Hello team, For context this is a splunk cloud environment with an es and an ad hoc search head. Today I tried to change an http event collector input from sourcetype _json to wiz. The wiz events are json events with a date midway through the event. Sample event (heavily obfuscated as it is company data): {   "event": {     "trigger": {       "source": "CLOUD_EVENTS",       "type": "Created",       "ruleId": "<rule_id>",       "ruleName": "WIZ-Splunk Integration"     },     "event": {       "name": "<eventname>",       "eventURL": "<url>",       "cloudPlatform": "AWS",       "timestamp": "2024-06-12T03:01:18Z",       "source": "<amazon source>",       "category": "List",       "path": null,       "actor": {         "name": "<account name>",         "type": "SERVICE_ACCOUNT",         "IP": "<FQDN>",         "actingAs": {           "name": "<role_name>",           "providerUniqueId": "<UniqID",           "type": "SERVICE_ACCOUNT",            "rawlog": {"addendum":null,"additionalEventData":null,"apiVersion":null,"awsRegion":"us-east-1","errorCode":null,"errorMessage":null,"eventCategory":"<event_category>","eventID":"<event_id>","eventName":"<event_name>","eventSource":"<amazon_link>","eventTime":"2024-06-12T03:01:18Z","eventType":"<type of event>","eventVersion":"<version number>","managementEvent":true,"readOnly":true,"recipientAccountId":"<account ID>","requestID":"<request_id>","requestParameters":{"DescribeVpcEndpointsRequest":{"VpcEndpointId":{"content":"<VPCENDPOINTID>","tag":1}}},"resources":null,"responseElements":null,"serviceEventDetails":null,"sessionCredentialFromConsole":null,"sharedEventID":null,"sourceIPAddress":"<source ip>","tlsDetails":null,"userAgent":"<user agent>","userIdentity":{"accountId":"<account ID>","arn":"<ARN>","invokedBy":"<USER>","principalId":"<principal ID>","sessionContext":{"attributes":{"creationDate":"2024-06-12T03:01:17Z","mfaAuthenticated":"false"},"sessionIssuer":{"accountId":"<account ID>","arn":"<ARN>","principalId":"<principal ID>","type":"Role","userName":"<role name>"}},"type":"AssumedRole"},"vpcEndpointId":null}         }       },       "subjectResource": {         "name": "",         "type": "",         "providerUniqueId": "",         "externalId": "<external ID>",         "region": "us-east-1",         "kubernetesCluster": "",         "kubernetesNamespace": "",         "account": {"externalId":"<external ID>","id":"<ID>"}       },       "matchedRules": " ruleId: ; ruleName: <RULE NAME> "     }   } } To accomplish the sourcetype name change I cloned the current configuration for _json under app search which was as follows: CHARSET = UTF-8 DATETIME_CONFIG= INDEXED_EXTRACTIONS=json KV_MODE=none SHOULD_LINEMERGE=true category=structured disabled=false pulldown_type=true LINE_BREAKER=([\r\n]+) NO_BINARY_CHECK=true   This cloned the config successfully but notably put it under app 000-self-service rather than search. I then set the input to the new sourcetype wiz.   Following this change some events began breaking incorrectly at the first timestamp in the log, a behavior not previously observed on sourcetype _json which had the same config.   
Sample broken event: Event1: {   "event": {     "trigger": {       "source": "CLOUD_EVENTS",       "type": "Created",       "ruleId": "<rule_id>",       "ruleName": "WIZ-Splunk Integration"     },     "event": {       "name": "<eventname>",       "eventURL": "<url>",       "cloudPlatform": "AWS",     Event 2:       "timestamp": "2024-06-12T03:01:18Z",       "source": "<amazon source>",       "category": "List",       "path": null,       "actor": {         "name": "<account name>",         "type": "SERVICE_ACCOUNT",         "IP": "<FQDN>",         "actingAs": {           "name": "<role_name>",           "providerUniqueId": "<UniqID",           "type": "SERVICE_ACCOUNT",            "rawlog": {"addendum":null,"additionalEventData":null,"apiVersion":null,"awsRegion":"us-east-1","errorCode":null,"errorMessage":null,"eventCategory":"<event_category>","eventID":"<event_id>","eventName":"<event_name>","eventSource":"<amazon_link>","eventTime":"2024-06-12T03:01:18Z","eventType":"<type of event>","eventVersion":"<version number>","managementEvent":true,"readOnly":true,"recipientAccountId":"<account ID>","requestID":"<request_id>","requestParameters":{"DescribeVpcEndpointsRequest":{"VpcEndpointId":{"content":"<VPCENDPOINTID>","tag":1}}},"resources":null,"responseElements":null,"serviceEventDetails":null,"sessionCredentialFromConsole":null,"sharedEventID":null,"sourceIPAddress":"<source ip>","tlsDetails":null,"userAgent":"<user agent>","userIdentity":{"accountId":"<account ID>","arn":"<ARN>","invokedBy":"<USER>","principalId":"<principal ID>","sessionContext":{"attributes":{"creationDate":"2024-06-12T03:01:17Z","mfaAuthenticated":"false"},"sessionIssuer":{"accountId":"<account ID>","arn":"<ARN>","principalId":"<principal ID>","type":"Role","userName":"<role name>"}},"type":"AssumedRole"},"vpcEndpointId":null}         }       },       "subjectResource": {         "name": "",         "type": "",         "providerUniqueId": "",         "externalId": "<external ID>",         "region": "us-east-1",         "kubernetesCluster": "",         "kubernetesNamespace": "",         "account": {"externalId":"<external ID>","id":"<ID>"}       },       "matchedRules": " ruleId: ; ruleName: <RULE NAME> "     }   } }   This was strange behavior but likely was caused by the default setting of BREAK_ONLY_BEFORE_DATE=true   To remedy this I edited the sourcetype config for wiz by adding the following: BREAK_ONLY_BEFORE ={[\r\n]\s+\"event\"\: BREAK_ONLY_BEFORE_DATE = false Note I left the value below as True SHOULD_LINEMERGE = true   However after clicking save the following changes were made: BREAK_ONLY_BEFORE ={[\r\n]\s+\"event\"\: LINE_BREAKER = {[\r\n]\s+\"event SHOULD_LINEMERGE = false   The configuration for BREAK_ONLY_BEFORE_DATE was unable to be saved and SHOULD_LINEMERGE was unable to be set to true while BREAK_ONLY_BEFORE was present.   I tried performing this change many times over hours and tried creating unrelated sourcetypes with BREAK_ONLY_BEFORE_DATE but was unable to set this setting on splunk cloud.  In addition, any attempt to set SHOULD_LINEMERGE to true while BREAK_ONLY_BEFORE was present resulted in SHOULD_LINEMERGE being set to false and LINE_BREAKER being set to the same value as BREAK_ONLY_BEFORE Other settings were able to be set as expected.  A final note for information is timestamp was set to auto. Are these configurations invalid in general or just unable to be set in settings > sourcetypes > advanced in splunk cloud? 
As an additional note, none of the settings I applied restored the earlier event-breaking behavior, and I was forced to revert the input back to sourcetype _json, where breaking worked as expected. I would appreciate any answers and am happy to provide more info if needed. Apologies for the long read.
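For reference, this is a sketch of the props.conf I think I am aiming for, assuming each new event starts with a top-level { followed by "event"; the LINE_BREAKER and TIME_PREFIX regexes are guesses based on the sample above, not something I have verified. My understanding is that the BREAK_ONLY_BEFORE* attributes only apply when SHOULD_LINEMERGE is true, which may be why the UI keeps rewriting them.

[wiz]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\{\s+"event"\s*:
TIME_PREFIX = "timestamp"\s*:\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%SZ
MAX_TIMESTAMP_LOOKAHEAD = 32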
index=acn_ac_snow_ticket_idx code_message=create uid="*Saml : Days to expire*" OR uid="*Self_Signed : Days to expire*" OR uid="*CA : Days to expire*" OR uid="*Entrust : Days to expire*"
| rex field=_raw "\"(?<INC>INC\d+),"
| rex field=uid "(?i)^(?P<source_host>.+?)__"
| table INC uid log_description source_host
| dedup INC uid log_description source_host
| rename INC as "Ticket_Number"
| selfjoin source_host [ search index=acn_lendlease_certificate_tier3_idx tower=* | table *]
| stats latest(tower) as Tower, latest(source_host) as source_host, latest(metric_value) as "Days To Expire", latest(alert_value) as alert_value, latest(add_info) as "Additional Info" by instance, Ticket_Number
| eval alert_value=case(alert_value==100,"Active", alert_value==300,"About to Expire", alert_value==500,"Expired")
| where alert_value="Active"
| search Tower="*" AND alert_value="*"
| sort "Days To Expire"
| rename instance as "Serial Number / Server ID", Tower as "Certificate Type", source_host as Certificate, alert_value as "Certificate Status"

I am trying to map the incident number to the corresponding source_host using the join command, but it is not working as expected.
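A sketch of the alternative I am considering, without join/selfjoin, assuming both indexes carry (or can be given) a common source_host field; the aggregations mirror the ones in my current search, but I have not confirmed that the certificate index events group cleanly this way:

(index=acn_ac_snow_ticket_idx code_message=create (uid="*Saml : Days to expire*" OR uid="*Self_Signed : Days to expire*" OR uid="*CA : Days to expire*" OR uid="*Entrust : Days to expire*")) OR (index=acn_lendlease_certificate_tier3_idx tower=*)
| rex field=_raw "\"(?<INC>INC\d+),"
| rex field=uid "(?i)^(?P<source_host>.+?)__"
| stats values(INC) as Ticket_Number, latest(tower) as "Certificate Type", latest(metric_value) as "Days To Expire", latest(alert_value) as alert_value, latest(add_info) as "Additional Info", values(instance) as "Serial Number / Server ID" by source_host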
Hello All, I'm trying to remove leading zeros in IP addresses using rex with mode=sed. The regular expression I'm trying to use for the substitution is "\b0+\B". However, it's not returning the required output. Example:

| rex field=<IP address field> mode=sed "\b0+\B"

I even tried with double backslashes, but no luck. Kindly assist to resolve this issue. Regards, Sid
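Would something like the following be the right direction? My understanding is that mode=sed expects a full substitution expression of the form s/regex/replacement/flags rather than a bare regex. A sketch, assuming the field is called ip (single-zero octets are left alone because the pattern requires a digit after the zeros):

| rex field=ip mode=sed "s/\b0+(\d)/\1/g"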
If I use a variable in the mvfilter match, I get the following error:

Error in 'EvalCommand': The arguments to the 'mvfilter' function are invalid.

If I replace the partialcode with a string, it works fine. Please help. Thank you so much.

| makeresults
| eval fullcode="code-abc-1111,code-abc-2222,code-xyz-1111,code-xyz-222"
| eval partialcode="code-abc"
| makemv delim="," fullcode
| eval fullcode2=mvfilter(match(fullcode,partialcode))

This one worked fine:

| makeresults
| eval fullcode="code-abc-1111,code-abc-2222,code-xyz-1111,code-xyz-222"
| eval partialcode="code-abc"
| makemv delim="," fullcode
| eval fullcode2=mvfilter(match(fullcode,"code-abc"))
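From the docs, mvfilter() can only reference the multivalue field itself, which would explain why a second field is rejected. Is a workaround with mvmap(), which I believe can reference other fields in its expression, the right approach? A sketch:

| makeresults
| eval fullcode="code-abc-1111,code-abc-2222,code-xyz-1111,code-xyz-222"
| eval partialcode="code-abc"
| makemv delim="," fullcode
| eval fullcode2=mvmap(fullcode, if(match(fullcode, partialcode), fullcode, null()))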
Hello, is it possible to use eventstats with conditions? For example, I want to apply eventstats only if the field name contains "student-1":

| eventstats values(if(match(name,"student-1"), name, null())) as student by grade

Please suggest. Thanks
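A sketch of what I think the syntax needs to be: stats-family functions accept an eval expression when it is wrapped in eval(...) inside the function call, so the condition can go there:

| eventstats values(eval(if(match(name,"student-1"), name, null()))) as student by grade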
I am trying to ingest a csv file and have configured the UF inputs.conf file as shown below:

[monitor://C:\<directory>\file.csv]
index = csv_data
sourcetype = csv
crcSalt = <SOURCE>

I created a new index on the Splunk GUI page and even added the new index in indexes.conf on the Splunk machine. However, it seems like the data is not being ingested into the 'csv_data' index I specified. When I change the index in the UF inputs.conf to my lastChanceIndex, for some reason, it starts to ingest the csv data. How do I make the data go to the csv_data index instead of the lastChanceIndex? Am I missing a step?
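These are the checks I plan to run next, on the assumption that either the monitor stanza is not being picked up on the forwarder or the csv_data index does not actually exist on the indexer that receives the data (commands assume a Linux-style $SPLUNK_HOME; on Windows the equivalent is %SPLUNK_HOME%\bin\splunk.exe):

# On the forwarder: confirm which inputs.conf stanza Splunk actually loaded
$SPLUNK_HOME/bin/splunk btool inputs list monitor --debug

# On the forwarder: confirm the file is being monitored
$SPLUNK_HOME/bin/splunk list monitor

# On the indexer: confirm the index exists there, not just on another instance
$SPLUNK_HOME/bin/splunk btool indexes list csv_data --debug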
Hello, I need help improving the efficiency of my search using eventstats. The search works just fine, but when I applied it to a large set of data, it took too long. Please suggest. Thank you.

IP 192.168.1.7 of server-A is connected to the "LoadBalancer-to-Server" network; LoadBalancer-A is connected to the "LoadBalancer-to-Server" network and the "Firewall-to-Loadbalancer" network. So server-A is behind a firewall (behindfirewall = "yes").

ip           name            network                   behindfirewall
192.168.1.1  LoadBalancer-A  Loadbalancer-to-Server    yes
172.168.1.1  LoadBalancer-A  Firewall-to-Loadbalancer  yes
192.168.1.7  server-A        Loadbalancer-to-Server    yes
192.168.1.8  server-B        Loadbalancer-to-Server    yes
192.168.1.9  server-C        network-1                 no
192.168.1.9  server-D        network-2                 no

| makeresults format=csv data="ip,name,network,
192.168.1.1,LoadBalancer-A,Loadbalancer-to-Server
172.168.1.1,LoadBalancer-A,Firewall-to-Loadbalancer
192.168.1.7,server-A,Loadbalancer-to-Server
192.168.1.8,server-B,Loadbalancer-to-Server
192.168.1.9,server-C,network-1
192.168.1.9,server-D,network-2"
| eventstats values(name) as servergroup by network
| eventstats values(network) as networkgroup by name
| eventstats values(networkgroup) as networkpath by servergroup
| eval behindfirewall = if(match(networkpath,"Firewall-to-Loadbalancer"),"yes","no")
| table ip, name, network, servergroup, networkgroup *
Hi, I am trying to get the error percentage of the HTTP response requests, but it's not working as expected.

index="john-doe-index"
| stats count AS Total count(eval(statusCode="2**")) as Success
| eval Failure = Total - Success
| eval Percent_Failure = round((Failure/Total)*100)."%"
| stats count by Percent_Failure

I took the above query from previous answers and am not sure why it's not working on my end: the ratio of 4xx to 2xx responses is high, yet the result always shows 100% with a count of 1.

Thanks!
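Is the problem that wildcards do not work inside eval? My understanding is that statusCode="2**" inside eval is a literal string comparison, so Success ends up as 0, and the final | stats count by Percent_Failure then just counts rows. A sketch of what I think the query should look like, assuming statusCode holds values like 200 or 404:

index="john-doe-index"
| stats count AS Total, sum(eval(if(match(statusCode, "^2\d\d"), 1, 0))) AS Success
| eval Failure = Total - Success
| eval Percent_Failure = round((Failure / Total) * 100, 2) . "%"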
Hi, I see there is an option in the PSA deployment script to configure more than one Heimdall replica. What are the advantages and disadvantages of using more than one Heimdall replica? We have scripts running on our PSAs, and I am trying to figure out if this would help with performance. Thanks, Roberto
Hi Team, I have a field extraction and a calculated field which are not working. Please let me know whether there is any other way to extract this.

EXTRACT-User = \"path\"\:\"auth\/(abc|xyz)\/login\/(?<User>[\w\_]+)
EVAL-user = if(error="invalid credentials",User,'auth.display_name')

"auth.display_name" is an existing field.
Hello All, perhaps I have the $64K question. I am trying to understand (better) the IOWAIT warnings and errors, i.e. the yellow and red icons, etc. I know that IOWAIT can be an issue, and only on Linux-based servers. I will guess that running Splunk Enterprise on a virtual Linux machine makes things harder. I have revised the Health Report Manager settings per a Splunk forum posting, and the issue is resolved for the most part. I can run an "unreasonable" search and get the warning icon, and then, as the search progresses, the red error icon. I have run some Linux commands like iostat and iotop while the search is running but do not see any useful data. I am just curious how Splunk determines the IOWAIT values as part of the health monitoring. I was also wondering, if I reset the health reporting values back to the defaults, how I might go about reducing the "IOWAIT" characteristic on the Splunk server. Thanks for any hints or tips, ewholz