All Posts

Strictly speaking, none of it is necessary, but it does make it easier to get data into Splunk. You already use the UF and HF, so you must have found them necessary for doing certain things. There may be other ways to do those things, but don't fix what isn't broken. The Deployment Server is there to help manage your UFs. Without a DS, you have to manage each UF separately and manually (unless you have automation to help). For more about the DS and what it does, see https://docs.splunk.com/Documentation/Splunk/9.2.1/Updating/Aboutdeploymentserver#What_is_deployment_server.3F
The DS system requirements are at https://docs.splunk.com/Documentation/Splunk/9.2.1/Updating/Planadeployment#Deployment_server_system_requirements
System requirements for UFs are at https://docs.splunk.com/Documentation/Forwarder/9.2.1/Forwarder/Deploy
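If it helps make the DS's role concrete, this is roughly what DS-managed UFs look like in configuration. The class name, hostname pattern, app name, and DS address below are made up for illustration:

# serverclass.conf on the deployment server (hypothetical names)
[serverClass:all_windows_ufs]
whitelist.0 = winuf-*

[serverClass:all_windows_ufs:app:my_outputs_app]
restartSplunkd = true

# deploymentclient.conf on each UF, pointing it at the DS
[target-broker:deploymentServer]
targetUri = ds.example.com:8089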
Try something like this:
| eval row=mvrange(0,2)
| mvexpand row
| eval group=if(row=0,A,B)
| eval field=if(row=0,"A","B")
| stats count(eval(field=="A")) as A count(eval(field=="B")) as B by group
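Applied to the search in the question (the index, message filter, and field names A and B are taken from that post), the same pattern might look like:

index=foo message="bar"
| eval row=mvrange(0,2)
| mvexpand row
| eval group=if(row=0,A,B)
| eval field=if(row=0,"A","B")
| stats count(eval(field=="A")) as "Field A" count(eval(field=="B")) as "Field B" by group

Each event is duplicated once, the group value is taken from field A on one copy and field B on the other, and stats then counts each copy under the right series.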
Have you tried the second option (allowRemoteLogin)? I can't say I've seen this myself, but it could be that you need to temporarily change that setting to get around the default password problem. If that works, then once you've changed your password, you should be able to revert the allowRemoteLogin setting. The following should help for values:

# The following 'allowRemoteLogin' setting controls remote management of your splunk instance.
# - If set to 'always', all remote logins are allowed.
# - If set to 'never', only local logins to splunkd will be allowed. Note that this will still allow
#   remote management through splunkweb if splunkweb is on the same server.
# - If set to 'requireSetPassword' (default behavior):
#   1. In the free license, remote login is disabled.
#   2. In the pro license, remote login is only disabled for the admin user that has not changed their default password
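If you do try the temporary override, a minimal sketch of the change in $SPLUNK_HOME/etc/system/local/server.conf on the forwarder would be the following; remember to revert it once the password is set:

[general]
# temporary override to get past the default-password lockout
allowRemoteLogin = always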
I would try to mirror the upgrade best practices and bring down Splunk servers based on their role, with the exception of bringing down the cluster manager and deployment server last:
search heads
indexers (sequentially)
CM
DS
Reverse the order to bring systems back online. Ideally, the monitoring console would be installed on the manager node as the last device to come offline and the first to come online, so it can monitor the state of the cluster during shutdown/boot. I HIGHLY RECOMMEND getting a second opinion on this from support if you can. I haven't had to go through this process yet personally; this is just my thought process. Hope this helps.
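As a rough sketch only (the paths and the maintenance-mode step are my assumptions; please confirm the sequence with support), the CLI side of that order might look like:

# on the cluster manager, before touching peers (assumed precaution)
/opt/splunk/bin/splunk enable maintenance-mode
# 1. on each search head member
/opt/splunk/bin/splunk stop
# 2. on each indexer, one at a time
/opt/splunk/bin/splunk stop
# 3. on the cluster manager
/opt/splunk/bin/splunk stop
# 4. on the deployment server
/opt/splunk/bin/splunk stop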
Upgraded Splunk universal forwarder from 9.0.2 to 9.1.0. ./splunk list monitor gives me the following error with the default password for the first time:
"Remote login has been disabled for 'admin' with the default password. Either set the password, or override by changing the 'allowRemoteLogin' setting in your server.conf file."
./splunk edit user admin -password <newpassword> -auth admin:changeme
Tried the above command to reset the default password; it still gives me:
"Remote login has been disabled for 'admin' with the default password. Either set the password, or override by changing the 'allowRemoteLogin' setting in your server.conf file."
Looking for any answers.
Looking for recommendations for automating the Splunk version upgrade process for a clustered (indexer & search head cluster) deployment. I'm curious if I can consolidate the upgrade process into a centrally automated solution.
Details:
Windows server based environment
indexer cluster
search cluster
multisite
deployment server
license/MC server
Thanks in advance!
Sounds like it's by design: https://community.splunk.com/t5/Knowledge-Management/summary-indexing-with-sisat-distinct-count-without-the-list-of/m-p/29384/highlight/true#M266
For fill_summary_index.py, see https://docs.splunk.com/Documentation/SplunkCloud/latest/Knowledge/Managesummaryindexgapsandoverlaps
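If the goal is just a summary index without the per-value lists the si* commands preserve, one workaround (not the si* approach; the summary index name here is assumed) is to compute the finished numbers yourself and write them with collect:

... | stats count as call_count dc(destinationnumber) as distinct_destinations by sourcenumber
| collect index=my_summary

The trade-off is that, unlike sistats results, these collected rows can't be safely re-aggregated across time ranges, since distinct counts would be double counted.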
Hi, I'm not able to integrate Splunk with Nozomi using the available app (Nozomi Networks Universal Add-on). On the other hand, I've tested the legacy add-on and receive the alerts/assets, but not with full info. The server (Nozomi Guardian) is self-signed. After configuring the latest version and setting up the inputs for receiving alerts, assets, etc., there's no data being received in the index, and from the Splunk logs I see the following:

06-13-2024 21:23:01.529 +0200 ERROR ExecProcessor [3854374 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-nozomi-networks-universal-add-on/bin/universal_session.py" HTTPSConnectionPool(host='192.168.1.4', port=443): Max retries exceeded with url: /api/open/sign_in (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:1106)')))

I thought the solution could be just disabling the SSL verification, but then why does the legacy add-on work fine while the new version does not? In case I need to disable SSL verification, I would like to know the right file and parameter.
Thank you,
So I have Splunk Cloud, but we still use a Heavy Forwarder, Universal Forwarder and a Deployment server. The UF has definitely come in handy for grabbing local data. However, I'm not sure what the Deployment server is for. We do use the Heavy Forwarder for various things. Does anyone have documentation of what is necessary and what is a nicety? And do they have knowledge of the specs needed?
I have data with two fields that share a static range of 10 values. I'd like to show a column chart with the buckets on the X axis and two bars in each bucket, one for field A, the other for field B. This doesn't work:

index=foo message="bar"
| stats count as "Field A" by A
| append [ search index=foo message="bar" | stats count as "Field B" by B ]

I'm sure I'm missing something obvious ... To reiterate, fields A and B are present in all events returned and share the same "buckets". Call them strings like "Group 1", "Group 2", etc. So A="Group 3" and B="Group 6" could be in the same event, and in the chart a count should be added to Group 3 for the Field A column and to Group 6 for the Field B column. Thanks!
No, the outer result is not important, as I am looking to create pass/fail charts with the inner results of each corresponding test.
I have UFs installed on some SQL servers that forward certain events (according to eventID) to my Splunk. I have created a search query to parse out the data needed to make a nice table. However, ideally I'd like to do this at ingest time instead of at search time. I was told by my manager to research props.conf and transforms.conf, and here I am. Not sure if that is the proper route or if there are other suggestions. Thank you.

index="wineventlog"
| rex field=EventData_Xml "server_principal_name:(?<server_principal_name>\S+)"
| rex field=EventData_Xml "server_instance_name:(?<server_instance_name>\S+)"
| rex field=EventData_Xml "action_id:(?<action_id>\S+)"
| rex field=EventData_Xml "succeeded:(?<succeeded>\S+)"
| table _time, action_id, succeeded, server_principal_name, server_instance_name
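For what it's worth, an index-time version of one of those extractions might look roughly like this (the transform name and sourcetype are assumptions; note also that Splunk generally recommends staying with search-time extraction unless indexed fields are really needed):

# transforms.conf (hypothetical transform name; WRITE_META makes it an indexed field)
[extract_server_principal_name]
REGEX = server_principal_name:(\S+)
FORMAT = server_principal_name::$1
WRITE_META = true

# props.conf (sourcetype assumed; applies the transform at index time)
[XmlWinEventLog]
TRANSFORMS-sqlaudit = extract_server_principal_name

# fields.conf, so search heads treat the field as indexed
[server_principal_name]
INDEXED = true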
Are eventStartsFrom and eventEndsAt both set in the events you want to retrieve or are they in separate but correlated events?
Splunk Enterprise 9.0.6, and building a summary index of sourcenumbers (count) and distinct destinations called (dc(destinationnumber)). When I run this:

... | stats count dc(destinationnumber) by sourcenumber

I get something like:

sourcenumber,count,dc(destinationnumber)
+15551234567,10,8

indicating it called 10 times to 8 different numbers. Perfect. But with this:

... | sistats count dc(destinationnumber) by sourcenumber

I get:

psrsvd_ct_destinationnumber,psrsvd_gc,psrsvd_v,psrsvd_vm_destinationnumber
10,10,1,+19991234567;2,+18881234567;2,+17771234567;1,+15551234567;1 (etc)

Found no clear help on the sistats page, and in other posts like this one it seems to work (though those are older posts and not using count). Best guess is that the vm column 'preserves' the details, but I don't know why dc() isn't working like I expect.
source_address=$token.source.address$
It could be that the events being returned are ones where the $token.source.address$ value exists elsewhere in the event.
You could look up the host and dest_port to retrieve another value from the lookup store, e.g. last time accessed (if you have saved that as well); then, if no data is retrieved, the host and dest_port are unknown.
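A minimal sketch of that idea, assuming a lookup called known_services with host, dest_port, and last_seen columns:

... | lookup known_services host dest_port OUTPUT last_seen
| eval status=if(isnull(last_seen),"unknown","known")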
Hi Antonio, to avoid this error (assuming this is a non-production environment) you can set splunkPlatform.insecureSkipVerify to "true" in the values.yaml file you use to deploy the collector:  https://github.com/signalfx/splunk-otel-collector-chart/blob/320b40a492bc479b12beb4aad20a85e1a9fd12c1/helm-charts/splunk-otel-collector/values.yaml#L62
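In the values.yaml that would be along these lines (only the relevant key shown; everything else left at chart defaults):

splunkPlatform:
  # skip TLS certificate verification for the HEC endpoint; non-production only
  insecureSkipVerify: true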
This shouldn't be rocket surgery. But I expect that, since Splunk was acquired by Cisco, this will never be resolved directly. Thanks to grangerx for doing God's Splunk's work.
I was able to solve this halfway through writing this. For future reference, you can't have $SPLUNK_HOME referenced in $SPLUNK_DB. At least for me, the server hadn't restarted and updated the value, so it didn't recognize it. I had to set the path manually:

SPLUNK_DB=/export/opt/splunk/data

Don't forget to leave the trailing / out. Then you can have your indexes.conf look like:

homePath = $SPLUNK_DB/hot/$_index_name/db
coldPath = $SPLUNK_DB/cold/$_index_name/colddb
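Putting that together, a sketch of the two files involved (assuming SPLUNK_DB is set in $SPLUNK_HOME/etc/splunk-launch.conf and using a hypothetical index name):

# splunk-launch.conf: a literal path, no $SPLUNK_HOME reference, no trailing slash
SPLUNK_DB=/export/opt/splunk/data

# indexes.conf (hypothetical index; thawedPath is also required per stanza)
[my_index]
homePath = $SPLUNK_DB/hot/$_index_name/db
coldPath = $SPLUNK_DB/cold/$_index_name/colddb
thawedPath = $SPLUNK_DB/thawed/$_index_name/thaweddb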