All Posts

For your dropdowns, where do the values come from? Are they static (known ahead of time and configured in the dashboard), or dynamic (the results of a search)? If more than one dropdown is selected, do you want both to be used, e.g. should the count for DRDO in Bangalore be 1?
Sorry, I am a beginner. Where is the complete query? When I select the location Bangalore from the dropdown, the single value count for the Final Status column should be displayed for that location. If I select the company name DRDO from the dropdown, it should display the Final Status single value count for that company. E.g.: the single value count for the Bangalore location is 3; the single value count for the company DRDO is 1.
Hi @fahimeh , ES doesn't have its own authentication method; it uses users from Splunk Enterprise and only has its own roles. If you delete a user in Splunk Enterprise, it isn't possible for that user to access the system anymore, but the investigations and actions from that user probably remain in the system, and if you search for an object created by that user you will find an orphaned object. Ciao. Giuseppe
Reference material - https://community.splunk.com/t5/Getting-Data-In/Diagrams-of-how-indexing-works-in-the-Splunk-platform-the-Masa/m-p/590774

Normally (when you're not using indexed extractions), the data is split into chunks, metadata is added _to whole chunks_, and the chunks are sent downstream to the HF/indexer for further processing. The first "heavy" component (either a HF or an indexer) which receives the data does all the heavy lifting and writes the data to indexes or sends the parsed data out. That data is not parsed again - if there are more components in the way, the parsed data is just forwarded to outputs and that's it.

If you enable indexed extractions, your data is parsed into indexed fields (which has its pros but also cons) and gets sent on as parsed data, which is not parsed again. (I'm not touching the ingest actions topic here.)

So you can either configure timestamp recognition on your UF based on the fields extracted from your JSON, if you want to keep indexed extractions enabled, or you can disable indexed extractions and parse the JSON at search time - then you have to let your HF/indexer know how to do line breaking and timestamp recognition. In either case it doesn't hurt to have a full set of settings for the sourcetypes on both layers (UF and HF/idx) - only the ones relevant in a specific place are "active".
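For illustration, a minimal props.conf sketch of the two options described above, assuming a JSON sourcetype named my_json and a top-level timestamp field named time_field in ISO-8601 format (all three are placeholders - adjust to your data):

Option 1 - keep indexed extractions and do timestamp recognition on the UF:

[my_json]
INDEXED_EXTRACTIONS = json
TIMESTAMP_FIELDS = time_field
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z

Option 2 - disable indexed extractions on the UF and let the HF/indexer do line breaking and timestamp recognition, with the JSON parsed at search time:

[my_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = "time_field"\s*:\s*"
MAX_TIMESTAMP_LOOKAHEAD = 40
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
KV_MODE = json

As noted above, deploying both sets of settings to both layers is harmless; each layer only honours the settings that apply to it.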
Hi @sgro777 , good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the Contributors
What is it that you need help with? You already have a query using the tokens (from the dropdowns?)
So to be clear: we have different sites, where we deployed different HFs which will also act as DS, not DC. These HFs are connected to a single MC for licensing and so on; the HFs can't communicate with each other, so they don't know anything about each other. Now the scenario is: we have 2 HFs, which are only initial deployments, connecting to the indexers via outputs.conf and acting as DS. One HF is connecting properly with the UFs and acting as a DS; the second is not, and is forwarding its traffic to our MC, which is used for licensing.
If I'm not mistaken, the license usage logs are generated by the cluster master, so you could try splitting by host. Otherwise, have a look at the raw data for that source to see if there's any other identifier in there that helps you tell things apart.
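A minimal sketch of the split-by-host idea, assuming the standard type=Usage fields in license_usage.log (b = bytes, h = host); adjust the time range and filters to your environment:

index=_internal source=*license_usage.log type=Usage
| stats sum(b) as bytes by h
| eval GB = round(bytes/1024/1024/1024, 2)
| sort - GB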
Invalid key in stanza [clustermaster:one] in /apps/splunk/splunk/etc/apps/100_gnw_cluster_search_base/local/server.conf, line 7: master_uri (value: https://<address>:8089).
Invalid key in stanza [clustermaster:one] in /apps/splunk/splunk/etc/apps/100_gnw_cluster_search_base/local/server.conf, line 8: pass4SymmKey (value: ***************************************).
Invalid key in stanza [clustermaster:one] in /apps/splunk/splunk/etc/apps/100_gnw_cluster_search_base/local/server.conf, line 9: multisite (value: true)
| rex field=ports max_match=0 "(?<port>\d+)" | mvexpand port
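For context, a small self-contained example of what this does; the ports field name comes from the answer, and the sample value is made up:

| makeresults
| eval ports="80, 443, 8080"
| rex field=ports max_match=0 "(?<port>\d+)"
| mvexpand port

The rex with max_match=0 extracts every number into a multivalue field port, and mvexpand then produces one row per value (80, 443, 8080).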
The add-on you mentioned is deprecated; the best way would be to use syslog.
You can use Splunk ODBC to fulfil this requirement; here are some reference docs for you.
I have sample data pushed to Splunk as below. Help me with a Splunk query where I want only unique server names with the final status as the second column. Compare the second column status both horizontally and vertically for each server. The condition: if any of the second column values is No for a server, then consider No as the final status for that server; if all the second column values are Yes for a server, then consider that server's final status as Yes.

sample.csv:
ServerName, Status, Department, Company, Location
Server1,Yes,Government,DRDO,Bangalore
Server1,No,Government,DRDO,Bangalore
Server1,Yes,Government,DRDO,Bangalore
Server2,No,Private,TCS,Chennai
Server2,No,Private,TCS,Chennai
Server3,Yes,Private,Infosys,Bangalore
Server3,Yes,Private,Infosys,Bangalore
Server4,Yes,Private,Tech Mahindra,Pune
Server5,No,Government,IncomeTax India, Mumbai
Server6,Yes,Private,Microsoft,Hyderabad
Server6,No,Private,Microsoft,Hyderabad
Server6,Yes,Private,Microsoft,Hyderabad
Server6,No,Private,Microsoft,Hyderabad
Server7,Yes,Government,GST Council,Delhi
Server7,Yes,Government,GST Council,Delhi
Server7,Yes,Government,GST Council,Delhi
Server7,Yes,Government,GST Council,Delhi
Server8,No,Private,Apple,Bangalore
Server8,No,Private,Apple,Bangalore
Server8,No,Private,Apple,Bangalore
Server8,No,Private,Apple,Bangalore

Note: the Department, Location and Company are the same for any given server; only the Status differs for each row of the server.

I already have a query to get the Final Status for a server. The query below gives me the unique Final Status count of each server:

| eval FinalStatus = if(Status="Yes", 1, 0)
| eventstats min(FinalStatus) as FinalStatus by ServerName
| stats min(FinalStatus) as FinalStatus by ServerName
| eval FinalStatus = if(FinalStatus=1, "Yes", "No")
| stats count(FinalStatus) as ServerStatus

But what I want is this: I have 3 dropdowns at the top of the classic dashboard:
1. Department
2. Company
3. Location

Whenever I select a Department, Company or Location from any of the dropdowns, I need to get the Final Status count of the servers based on that field. For example, if Bangalore is selected from the Location dropdown, I need to get the final status count for the servers in that location; if I select the company DRDO from the dropdown, I should get the final status count for the servers of that company. I think it's something like:

| search department="$department$" Company="$Company$" Location="$Location$"

Please help with the Splunk query.
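A minimal sketch of how the tokens could be combined with the existing FinalStatus logic, assuming each dropdown's token has a default value of * (for example via an "All" choice whose value is *), so that an unselected dropdown matches everything; the base search is a placeholder and the field names follow the CSV headers:

<your base search>
| search Department="$department$" Company="$Company$" Location="$Location$"
| eval FinalStatus = if(Status="Yes", 1, 0)
| stats min(FinalStatus) as FinalStatus by ServerName
| eval FinalStatus = if(FinalStatus=1, "Yes", "No")
| stats count as ServerStatus

With the sample data, selecting Bangalore in the Location dropdown would leave Server1, Server3 and Server8 and give ServerStatus = 3, which matches the example in the question; selecting DRDO in the Company dropdown would give 1.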
Can you try to add the SSL CA chain to the below locations and see if it works?

1) /opt/splunk/lib/python3.7/site-packages/certifi
and
2) /etc/apps/<Add-on_folder>/lib/certify
Splunk Stream utilises KVStore services; the 500 ERROR says that the app is not able to communicate with the KVStore. You can try a fresh install - it will solve these errors and the problem you are facing.
Hey @bharat55 Where is it installed? On-prem or Cloud? You should have filed a support case for such issues.
Hi PickleRick,  If I understand correctly, I either do all the parsing on the UF, or I remove everything from the UF and move the parsing to the indexer (IDX)?
In short, download the codebase from Github as a zip, then you can either install it from the GUI or extract the zip to $SPLUNK_HOME/etc/apps and restart Splunk.
@ta1 There are installation instructions in the README.md file in the Github repo: https://github.com/plusserver/collectd/blob/master/README.md#installation