All Posts


I tried this on a server and found the opposite to be true: it was setting the index across all stanzas, at least the ones defined in etc\system\default\inputs.conf for the forwarder.
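For anyone comparing notes, a minimal sketch of how a default-stanza index setting propagates; the paths, stanza names, and index names below are illustrative, not taken from the poster's system:

# inputs.conf (illustrative)
[default]
index = example_idx
# applies to every stanza below that does not set its own index

[monitor:///var/log/example.log]
# no index attribute here, so this input inherits index = example_idx

[monitor:///var/log/other.log]
index = other_idx
# an explicit setting in the stanza overrides the [default] value

Settings under etc\system\default are the lowest-precedence layer, so anything set in an app's local directory wins over them.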
How can you query an index to find out the data types of the fields and any attributes that describe each field? This is from a data governance perspective, for documentation. Apologies, I'm new to this. I know that you can query the index, see the fields listed, and view some of the metadata-type info, but it isn't exportable. I would like to query that info and export it. Thanks.
@tolgaakkapulu I'm afraid I'm a little stuck here too, as it sounds like it's configured correctly and you've confirmed the data coming back, the backfill, the connection, etc. If you're feeling adventurous you could modify the Python in the app to add more logging and see if that helps. In $SPLUNK_HOME/etc/apps/TA-otx/bin/input_module_otx.py, find the following section of code:

response = helper.send_http_request(
    'https://otx.alienvault.com/api/v1/pulses/subscribed',
    'GET',
    parameters={'modified_since': since},
    headers={'X-OTX-API-KEY': api_key},
    verify=True,
    use_proxy=True
)
response.raise_for_status()
pulses = response.json()['results']

Replace it with:

response = helper.send_http_request(
    'https://otx.alienvault.com/api/v1/pulses/subscribed',
    'GET',
    parameters={'modified_since': since},
    headers={'X-OTX-API-KEY': api_key},
    verify=True,
    use_proxy=True
)
helper.log_info("modified_since: %s" % str(since))
response.raise_for_status()
respData = response.json()
helper.log_info("Response from request")
helper.log_info(respData)
pulses = respData['results']

Then disable and re-enable the input and check the logs to see if it gives any more insight!
In case it helps @tolgaakkapulu: you do *not* need to create the index on the HF. This app allows you to type in a chosen index name rather than select from a static list.
Yes, but only if you're using the GUI to configure the input. Even then it's not strictly necessary: you can define the input with any destination index and then manually edit inputs.conf (or use REST to update the input).
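A minimal sketch of the manual route, assuming the input ends up in the add-on's local inputs.conf; the stanza name, index, and credentials below are illustrative, not taken from the app:

# $SPLUNK_HOME/etc/apps/TA-otx/local/inputs.conf (stanza name is illustrative)
[otx://my_otx_input]
index = threat_intel
interval = 3600

# Roughly the same change via REST, assuming the input scheme is "otx":
# curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/TA-otx/data/inputs/otx/my_otx_input -d index=threat_intel

Disable and re-enable the input (or restart) afterwards so the new destination index takes effect.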
Thanks for the feedback. I was able to make this work in Simple XML with what you provided. As I started learning Splunk I worked solely with Dashboard Studio, but quickly realized some of the queries I wanted to run only work in Simple XML. Again, appreciate the assistance.
@PickleRick The index needs to be created on the Heavy Forwarder (HF) so it can be selected in the data input configuration. There's no requirement to store data on the HF unless indexAndForward=true is explicitly set. This is mainly for naming consistency within the GUI.
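If you do go the GUI route, a minimal sketch of what that local index definition on the HF might look like; the index name and paths are illustrative:

# indexes.conf on the HF (index name is illustrative)
[threat_intel]
homePath   = $SPLUNK_DB/threat_intel/db
coldPath   = $SPLUNK_DB/threat_intel/colddb
thawedPath = $SPLUNK_DB/threat_intel/thaweddb

As long as forwarding is configured and indexAndForward stays at its default of false in outputs.conf, nothing is actually written into this index on the HF; it only exists so the name shows up in the input's index dropdown.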
I totally agree with @PickleRick that you need a local Splunk Partner or sales engineer to go through your current setup and how to proceed with it! To help you we need much more information; we would also have to see your whole environment and use cases, and understand your business, to make any real suggestions. If/when you want to know more about SmartStore, I suggest you join Splunk's Slack and read and ask more there. https://splunkcommunity.slack.com/archives/CD6JNQ03F
Hmm... there is not a single mention of this variable in the docs (https://splunk.github.io/splunk-ansible/ADVANCED.html#inventory-script). What is it supposed to do?
I worked with Splunk Support and it turns out there is a known issue (a regression in one of their Python libs). You can work around it by setting the environment variable ENABLE_TCP_MODE to true, either on the docker run command line (-e ENABLE_TCP_MODE=true) or in your compose file (if you use the list syntax, be sure to leave true unquoted).
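A minimal compose sketch of that workaround; the service name, image tag, and the other settings are illustrative and would sit alongside whatever your file already defines:

services:
  splunk:
    image: splunk/splunk:latest
    environment:
      # list syntax: leave true unquoted
      - ENABLE_TCP_MODE=true
      - SPLUNK_START_ARGS=--accept-license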
Why would you create an index on the HF? This is plain wrong. An HF is supposed to receive/fetch the data and forward it to the next tier (usually an indexer, but it can be an intermediate forwarder). It's not supposed to do local indexing.
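For reference, a minimal sketch of that forwarding setup on the HF; the group name and indexer addresses are hypothetical:

# outputs.conf on the HF (hosts are illustrative)
[tcpout]
defaultGroup = primary_indexers
indexAndForward = false

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

With this in place the HF parses the data and sends it on; indexAndForward = false (the default) means it keeps no local copy.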
1. The add-on is not Splunk-supported, so you might need to contact the authors directly.
2. We don't know your architecture, and we don't know where in your environment you installed the add-on.
3. We don't know _exactly_ what you did and on which component. "Completing all the installation steps" is kinda vague.
@tolgaakkapulu We need to install the add-on on the heavy forwarder to configure the data inputs. This allows the heavy forwarder to collect and forward data to the indexers. The same add-on should be installed on the search heads to enable search-time parsing.

Most add-ons should be installed on BOTH the HF/indexer and the search head. That's because they often have some properties that apply at index time and others that apply at search time. See: Where to install Splunk add-ons - Splunk Documentation

NOTE: If your first Splunk Enterprise instance is a Heavy Forwarder (HF), install the add-on there for data parsing and configuring data inputs. If there's no Heavy Forwarder (HF) in your environment and you need to configure the add-on's data input, install the add-on on the search head and set up the data input there.
Hi @bpenny
You should be able to do a simple lookup for this, something like:

| lookup typesEnrich.csv type AS msg.message_set{}.type OUTPUT typeDescription

To demonstrate this I've created a sample lookup file:

| makeresults count=1
| eval type=1, typeDescription="Type A"
| append [ | makeresults count=1 | eval type=2, typeDescription="Type B" ]
| append [ | makeresults count=1 | eval type=3, typeDescription="Type C" ]
| append [ | makeresults count=1 | eval type=4, typeDescription="Type D" ]
| append [ | makeresults count=1 | eval type=5, typeDescription="Type E" ]
| append [ | makeresults count=1 | eval type=6, typeDescription="Type F" ]
| table type typeDescription
| outputlookup typesEnrich.csv

Then, using some sample data, we can emulate your use case (hopefully!):

| makeresults
| eval json_data = "{\"msg\":{\"message_set\": [{\"type\": 1}, {\"type\": 2}, {\"type\": 4}]}}"
| eval _raw=json_extract(json_data,"")
| table _raw
| spath input=_raw
| lookup typesEnrich.csv type AS msg.message_set{}.type OUTPUT typeDescription

Which gives the matching typeDescription value for each type in the event.
Check out the lookup function.  It should do what you want and put the results in a separate JSON array.  
Great answer. Thank you. Yeah, CIP is such a pain in the butt but needed. 
Hi @mooredaCIP
If you are referring to the underlying Splunk server software itself, then no: Splunk ESCU only contains knowledge objects (configuration) and does not change any binaries relating to the Splunk server.

As for the ESCU contents that do get updated: a version update can modify content within the ESCU app's "default" folder, so any changes or modifications to ESCU content that you have applied in Enterprise Security will not be affected, because those changes are written to the local directory, which is not touched by an update.

Generally, when utilising Splunk ESCU you would clone a detection, which means you take a copy of the ESCU knowledge object *at that point in time*; so again, those cloned searches would not be affected.
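To illustrate the default/local split this relies on, a rough sketch; the app directory is the usual ESCU one, but the stanza and setting are purely illustrative:

$SPLUNK_HOME/etc/apps/DA-ESS-ContentUpdate/
    default/savedsearches.conf   (shipped content, replaced by each content update)
    local/savedsearches.conf     (your edits, layered on top; an update does not touch this file)

If default sets disabled = 1 for a stanza and your local file sets disabled = 0 for the same stanza, the local value wins after layering.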
Good day. I work in a heavily regulated critical infrastructure environment. Our compliance change management requires us to consider baseline changes in a server system. Do Content Updates alter the baseline for the Splunk server onsite? We do not use cloud. Are there hash values that are checked when pushing the updates? We need to have the updates but I can't just open a whole change management process each time this has to be updated. I need assurance that critical infrastructure is considered in ES content updates. Thanks in advance. 
We have a use case where some JSON being ingested into Splunk contains a list of values like this:

"message_set": [
    { "type": 9 },
    { "type": 22 },
    { "type": 15 },
    ...
],

That list has an arbitrary length, so it could contain anywhere from one up to around 30 "type" values. Splunk is parsing the JSON just fine, so these fields can be referenced as "message_info.message_set{}.type" in searches. I'd like to set up an inputlookup that maps these numerical values to more descriptive text. Is there a way to apply an inputlookup across an entire list of arbitrary size like this, or would I need to explicitly add an inputlookup definition for each individual index in the list? I'd ultimately like to add these as LOOKUP settings in the sourcetype for this data so that they're automatically applied for all searches.
This is invalid syntax. You can't search an index and then pipe into makeresults: makeresults is a generating command, so it has to be the first command of a search (or of a subsearch).
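A minimal sketch of the two valid shapes, using placeholder index and field names. Either start the search with makeresults:

| makeresults
| eval example_field="static value"

or search the index first and bring makeresults in through an append subsearch:

index=example_idx sourcetype=example_st
| append
    [| makeresults
     | eval example_field="static value" ]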