All Posts

So, my question about what you have in your real search before eventstats is significant, because ALL the data you have in the search up to eventstats will travel to the search head. Using the fields statement will remove fields you don't want from the data sent to the SH. If you have a table statement before the eventstats, that is also a transforming command, so it will cause the data to go to the SH - for efficiency you want to keep as much of the search on the indexers and only go to the SH with the minimum amount of data you actually need. Can you post the full search?

Your 3rd eventstats is splitting by servergroup, which is now a multivalue field, so it may not behave the way you expect.

As for creating the lookup, from your examples I surmise that if "name" is titled "LoadBalancer-XXX" then it is a load balancer, so collect all network names for all load balancers into a lookup, e.g.

| makeresults format=csv data="ip,name,network
192.168.1.1,LoadBalancer-A,Loadbalancer-to-Server
172.168.1.1,LoadBalancer-A,Firewall-to-Loadbalancer
172.168.1.2,LoadBalancer-B,Loadbalancer-to-Server
192.168.1.6,server-A,Loadbalancer-to-Server
192.168.1.7,server-A,Loadbalancer-to-Server
192.168.1.8,server-B,Loadbalancer-to-Server
192.168.1.9,server-C,network-1
192.168.1.9,server-D,network-2"
| search network="Firewall-to-Loadbalancer" OR name="LoadBalancer-*"
| stats values(network) as network by name
| eval behindfirewall = if(match(network,"Firewall-to-Loadbalancer"),"1","0")
| outputlookup output_format=splunk_mv_csv firewall.csv

Then do

| lookup firewall.csv network OUTPUT behindfirewall

Not sure if that will do what you want, but maybe it gives you some ideas - I don't know your data well enough to know what's what.
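For example, a minimal sketch of trimming the event stream before eventstats (the index, sourcetype and field names here are made-up placeholders, not your real data):

index=school sourcetype=grades
| fields grade name servergroup
| eventstats values(name) as students by grade

The fields command is a distributable streaming command, so it runs on the indexers and only the three named fields travel to the SH for the eventstats.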
I'm a little unclear on your requirement, but your working eventstats example gives you the "Expected result" of

grade   name          student
A       student-1-a   student-1-a
                      student-1-b
                      student-1-c
A       student-1-b   student-1-a
                      student-1-b
                      student-1-c
...

so you want all values of student-X-Y to be included for each combination of student-X-Y? In that case, you don't need the match statement, so what is the issue? Depending on the data volume, eventstats can be slower, so you could use this variant

| eval partialname=substr(name,1,9)
| stats values(name) as student by grade partialname
| eval name=student
| mvexpand name

that uses stats, which will be more efficient than eventstats, but then mvexpand will be slower. You can measure the performance if volume is an issue. (Note that substr is 1-based, so the prefix starts at index 1, not 0.)
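For comparison, a sketch of the eventstats form of the same idea (assuming the same 9-character prefix convention as above):

| eval partialname=substr(name,1,9)
| eventstats values(name) as student by grade partialname

This keeps every original event and just adds the multivalue student field to each one, so no mvexpand is needed.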
Search-time extractions are preferred over index-time extractions because they use less storage (none) and don't slow down indexing. You can extract fields automatically at search time by adding EXTRACT settings to the sourcetype's props.conf stanza.

[xmlwineventlog]
EXTRACT-s_p_n = (server_principal_name:(?<server_principal_name>\S+)) in EventData_Xml
EXTRACT-s_i_n = (server_instance_name:(?<server_instance_name>\S+)) in EventData_Xml
EXTRACT-a_i = (action_id:(?<action_id>\S+)) in EventData_Xml
EXTRACT-succeeded = (succeeded:(?<succeeded>\S+)) in EventData_Xml
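Once those settings are in place, a quick way to verify the extractions at search time (using the index from your search):

index="wineventlog"
| table _time, action_id, succeeded, server_principal_name, server_instance_name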
Try this

| rex field=message "reqPath\\\":\\\".*/(?<reqPath>\w+)"

where the .* is a greedy match up to the final / character. For example, given a (hypothetical) message fragment like reqPath\":\"/api/v1/users/getProfile\", the greedy .* consumes everything up to the last /, so reqPath captures getProfile.
Good luck!
Take a look at https://ideas.splunk.com/

However, I suspect you will not get any traction with that; your example defines colour based on index and sourcetype rather than Splunk deciding on the colour to use, so I am not sure I understand your original distinction between pleasant and unpleasant results and how that is defined.

Anyway, have you looked at event types, where you can define colours for events?
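For reference, a minimal eventtypes.conf sketch (the stanza name and search are hypothetical; the color setting accepts values such as et_green, et_red, et_blue):

[pleasant_results]
search = index=main sourcetype=access_combined status=200
color = et_green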
Yes, you can automate Splunk upgrades. Many customers do so using a variety of tools. There's nothing special needed; just teach your automation to perform the same steps you would do manually. Those steps are documented at https://docs.splunk.com/Documentation/Forwarder/9.2.1/Forwarder/InstallaWindowsuniversalforwarderfromaninstaller
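For example, on Windows the universal forwarder MSI can be driven silently from your automation. This is just a sketch, and the MSI filename/version is a placeholder; see the linked docs for the full set of installer flags:

msiexec.exe /i splunkforwarder-9.2.1-x64-release.msi AGREETOLICENSE=Yes /quiet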
Strictly speaking, none of it is necessary, but it does make it easier to get data into Splunk. You already use the UF and HF, so you must have found them necessary for doing certain things. There may be other ways to do those things, but don't fix what isn't broken.

The Deployment Server is there to help manage your UFs. Without a DS, you have to manage each UF separately and manually (unless you have automation to help). For more about the DS and what it does, see https://docs.splunk.com/Documentation/Splunk/9.2.1/Updating/Aboutdeploymentserver#What_is_deployment_server.3F . The system requirements are at https://docs.splunk.com/Documentation/Splunk/9.2.1/Updating/Planadeployment#Deployment_server_system_requirements

System requirements for UFs are at https://docs.splunk.com/Documentation/Forwarder/9.2.1/Forwarder/Deploy
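To give a flavour of what the DS does, here is a minimal serverclass.conf sketch for the deployment server (the server class, whitelist pattern and app name are all hypothetical) that pushes one app to matching UFs:

[serverClass:windows_ufs]
whitelist.0 = winuf-*

[serverClass:windows_ufs:app:my_outputs_app]
restartSplunkd = true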
Try something like this

| eval row=mvrange(0,2)
| mvexpand row
| eval group=if(row=0,A,B)
| eval field=if(row=0,"A","B")
| stats count(eval(field=="A")) as A count(eval(field=="B")) as B by group

This duplicates each event, takes the bucket value from field A for one copy and from field B for the other, then counts per bucket and per series, giving one row per bucket with columns A and B ready for a column chart.
Have you tried the second option (allowRemoteLogin)? I can't say I've seen this myself, but it could be that you need to temporarily change that setting to get around the default password problem. If that works, then once you've changed your password, you should be able to revert the allowRemoteLogin setting. The following should help for values:

# The following 'allowRemoteLogin' setting controls remote management of your splunk instance.
# - If set to 'always', all remote logins are allowed.
# - If set to 'never', only local logins to splunkd will be allowed. Note that this will still allow
#   remote management through splunkweb if splunkweb is on the same server.
# - If set to 'requireSetPassword' (default behavior):
#   1. In the free license, remote login is disabled.
#   2. In the pro license, remote login is only disabled for the admin user that has not changed their default password
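A sketch of the temporary override, which goes in the [general] stanza of server.conf on the forwarder (revert it once the password is changed):

[general]
allowRemoteLogin = always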
I would try to mirror the upgrade best practices and bring down Splunk servers based on their role, with the cluster manager and deployment server brought down last:

search head
indexers (sequentially)
CM
DS

Reverse the order to bring systems back online. Ideally, the monitoring console would be installed on the manager node, as the last device to go offline and the first to come online, to monitor the state of the cluster during shutdown/boot.

I HIGHLY RECOMMEND getting a second opinion on this from support if you can. I haven't had to go through this process yet personally; this is just my thought process. Hope this helps.
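As a sketch (default Windows install path assumed), the per-instance commands in the order above might look like this; splunk offline is the graceful way to take an indexer cluster peer down:

REM 1. search head:
"C:\Program Files\Splunk\bin\splunk.exe" stop

REM 2. each indexer peer, one at a time:
"C:\Program Files\Splunk\bin\splunk.exe" offline

REM 3-4. CM, then DS:
"C:\Program Files\Splunk\bin\splunk.exe" stop

REM bring everything back up in reverse order:
"C:\Program Files\Splunk\bin\splunk.exe" start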
Upgraded Splunk universal forwarder from 9.0.2 to 9.1.0. ./splunk list monitor gives me the following error with the default password for the first time:

"Remote login has been disabled for 'admin' with the default password. Either set the password, or override by changing the 'allowRemoteLogin' setting in your server.conf file."

Tried the following command to reset the default password:

./splunk edit user admin -password <newpassword> -auth admin:changeme

It still gives me: "Remote login has been disabled for 'admin' with the default password. Either set the password, or override by changing the 'allowRemoteLogin' setting in your server.conf file."

Looking for any answers.
Looking for recommendations for automating the Splunk version upgrade process for a clustered (indexer & search head cluster) deployment. I'm curious if I can consolidate the upgrade process into a centrally automated solution.

Details
- Windows server based environment
- indexer cluster
- search cluster
- multisite
- deployment server
- license/MC server

Thanks in advance!
Sounds like it's by design for fill_summary_index.py: https://community.splunk.com/t5/Knowledge-Management/summary-indexing-with-sisat-distinct-count-without-the-list-of/m-p/29384/highlight/true#M266

fill_summary_index.py itself is described here: https://docs.splunk.com/Documentation/SplunkCloud/latest/Knowledge/Managesummaryindexgapsandoverlaps
Upgraded Splunk universal forwarder to 9.1.0 from 9.0.2. ./splunk list monitor gives me the following error with the default password for the first time:

"Remote login has been disabled for 'admin' with the default password. Either set the password, or override by changing the 'allowRemoteLogin' setting in your server.conf file."

Tried the command above to reset the default password; it still gives me: "Remote login has been disabled for 'admin' with the default password. Either set the password, or override by changing the 'allowRemoteLogin' setting in your server.conf file."

Looking for any answers.
Hi, I'm not able to integrate Splunk with Nozomi using the available app (Nozomi Networks Universal Add-on). On the other hand, I've tested the legacy add-on and receive the alerts/assets, but not with full info. The server (Nozomi Guardian) uses a self-signed certificate. After configuring the latest version and setting up the inputs for receiving alerts, assets, etc., there's no data being received in the index, and in the Splunk logs I see the following:

06-13-2024 21:23:01.529 +0200 ERROR ExecProcessor [3854374 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-nozomi-networks-universal-add-on/bin/universal_session.py" HTTPSConnectionPool(host='192.168.1.4', port=443): Max retries exceeded with url: /api/open/sign_in (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:1106)')))

I thought the solution could be to just disable SSL verification, but then why does the legacy add-on work fine when the new version does not? In case I need to disable SSL verification, I would like to know the right file and parameter.

Thank you,
So I have Splunk Cloud, but we still use a Heavy Forwarder, Universal Forwarder and a Deployment Server. The UF server has definitely come in handy for grabbing local data. However, I'm not sure what the Deployment Server is for. We do use the Heavy Forwarder for various things.

Does anyone have documentation on what is necessary and what is a nicety? And does anyone know the specs needed?
I have data with two fields that share a static range of 10 values. I'd like to show a column chart with the buckets on the X axis and two bars in each bucket, one for field A, the other for field B.

This doesn't work:

index=foo message="bar"
| stats count as "Field A" by A
| append
    [ search index=foo message="bar"
    | stats count as "Field B" by B ]

I'm sure I'm missing something obvious ...

To reiterate, fields A and B are present in all events returned and share the same "buckets". Call them strings like "Group 1", "Group 2", etc. So A="Group 3" and B="Group 6" could be in the same event, and in the chart a count should be added to Group 3 for the Field A column and Group 6 for the Field B column.

Thanks!
No, the outer result is not important, as I am looking to create pass/fail charts with the inner results of each corresponding test.
I have UFs installed on some SQL servers that forward certain events (according to event ID) to my Splunk. I have created a search query to parse out the data needed to make a nice table. However, ideally I'd like to do this at ingest time instead of at search time. I was told by my manager to research props.conf and transforms.conf, and here I am. Not sure if that is the proper route or if there are other suggestions. Thank you.

index="wineventlog"
| rex field=EventData_Xml "(server_principal_name:(?<server_principal_name>\S+))"
| rex field=EventData_Xml "(server_instance_name:(?<server_instance_name>\S+))"
| rex field=EventData_Xml "(action_id:(?<action_id>\S+))"
| rex field=EventData_Xml "(succeeded:(?<succeeded>\S+))"
| table _time, action_id, succeeded, server_principal_name, server_instance_name