All Posts


To use a lookup to enrich a search, the lookup needs to exist as a lookup on the search head. A lookup on a heavy forwarder is not going to be available at search time. What you need to do is get a copy of the lookup onto the SH.

The easiest (imo) option is to index the lookup file on the HF: simply define it as an input on the HF and have Splunk monitor it for changes. You can send this to any index, but let's assume you create and use one called "lookups_index" and a sourcetype called "my_hf_lookup".

On your search head, you can now create a lookup-generating search. Depending on what your lookup contains (dates, product_ids, error codes), you would create a search like:

index=lookups_index sourcetype=my_hf_lookup
| dedup product_code
| table product_code product_description product_price
| outputlookup my_sh_lookup.csv

I like to name these something like "LOOKUPGEN-my_sh_lookup.csv". You can then schedule that to run once an hour/day/week, depending on your anticipated lookup change frequency.

You can then use:

| lookup my_sh_lookup.csv product_code OUTPUT product_description product_price

in your searches, although I find it better practice to actually create a lookup definition and use that.
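As a sketch of that monitor input on the HF (the file path here is a hypothetical example; the index and sourcetype match the names assumed above):

```
# inputs.conf on the heavy forwarder
# /opt/lookups/product_lookup.csv is a hypothetical path - use your real one
[monitor:///opt/lookups/product_lookup.csv]
index = lookups_index
sourcetype = my_hf_lookup
```

Splunk will re-read the file when it changes, so the scheduled LOOKUPGEN search on the SH picks up updates on its next run.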
Indeed, you're right.

| makeresults
| eval a=mvappend("","")
| eval c=mvcount(a)

And we have c=2. That's funny. One should really deliberately choose some non-existent field.
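A sketch of the null() variant discussed in this thread (based on mvappend skipping null values, unlike empty strings):

```
| makeresults
| eval a=mvappend(null(), "value")
| eval c=mvcount(a)
```

Here c should come out as 1, since null() contributes no value to the multivalue field, whereas an empty string does.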
Thanks for your reply. I will contact support about this bug.
Yeah, but for now I just want to know if it is possible to disable it from the stream conf. I know it's better for full visibility, but besides the license limits, I also want to know the possibilities.
I'll try this one next.
Hope you find this helpful: https://splunk.my.site.com/customer/s/article/When-creating-an-incident-from-ServiceNow-short-description-field-is-limited-to-80-characters
Hi All - A new version of the IPQS Splunk plugin is now available, which fixes the previously reported issues: https://splunkbase.splunk.com/app/5423 Please let us know if you encounter any new errors; we'll be happy to investigate. You can also message support@ipqualityscore.com.
Empty strings are counted:

| makeresults
| eval a="a", b="b", d=""
| eval a=mvappend(a, "", b, d)
| eval c=mvcount(a)

and the foreach would result in an empty row:

| foreach * [ | eval a=mvappend(a, if("<<FIELD>>"="location", "", json_object("location",'location',"name","<<FIELD>>","value",'<<FIELD>>'))) ]

I'd always assumed that using null was because mvappend was a bit strange, but you're right, it is just another field. I'll have to go check some old searches.
Hi @elend

Splunk Stream supports using Berkeley Packet Filter strings to filter out traffic in your streamfwd.conf file. Something like:

[streamfwd]
streamfwdcapture.0.filter = not udp

For more details check out https://docs.splunk.com/Documentation/StreamApp/8.1.0/DeployStreamApp/ForwarderParameters#:~:text=N%3E.port%20%3D%20443-,Use%20streamfwdcapture%20to%20specify%20network%20interfaces,-By%20default%2C

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
OK, a bit more information. I'm admittedly running a slightly older version of Splunk (9.2.6); I'm not sure if this issue is fixed in subsequent versions. It seems like the mobile alert visualization simply doesn't populate the token properly. I augmented my dashboard with a search block like this:

<query>| makeresults | eval formToken="$form.netidToken$", notFormToken="$netidToken$" | table formToken notFormToken</query>

In the web UI, both of those values get populated from the input field with the 'netidToken' setting. In mobile, the form.netidToken call doesn't work at all, and the netidToken value is initially populated with the literal text "$netidToken$". After manually entering a value into the tokenized input, it updates correctly, but it never gets populated when called from the alert itself.
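If anyone else hits this: one workaround worth trying is giving the input an explicit default in the Simple XML, so the token is defined before the user touches it (a sketch; the token name matches the dashboard above, but whether mobile honours the default is exactly what's in question here):

```xml
<input type="text" token="netidToken">
  <label>NetID</label>
  <default></default>
</input>
```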
Hi @ez-secops-awn

I would suggest reaching out directly to VirusTotal, who created this app, as they may be able to add it as a future feature request. Their contact details are contact@virustotal.com
Nice, thanks @PickleRick - I rarely use single quotes with $ in them, so I had incorrectly assumed they behaved the same as double quotes. Every day is a school day. Will
Hi @Mirza_Jaffar1

Did you copy the $SPLUNK_HOME/etc/auth/splunk.secret file from the old server to the new one? This is the file that Splunk uses for encrypting sensitive configuration/secrets, and it is unique to each server unless copied.

Regarding the permissions issues, did you manage to resolve them? Who owns the files/folders, and which user is the Splunk service running as?
Is there any possibility that there is a field named sourcetype in your event?
I did check, but nothing seemed to work: chmod 770 is what was used, but chmod 550 should work! This is something that usually occurs with permissions. Is there any other chmod numeric mode (550, 775, 7770) which provides the same permissions to root and the user?
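For what it's worth, you can verify what a numeric mode actually grants by inspecting the file's octal mode with stat; a minimal sketch (the throwaway temp file is just for illustration):

```shell
# Create a throwaway file and inspect its octal mode after each chmod
f=$(mktemp)
chmod 770 "$f"   # rwx for owner and group, nothing for others
stat -c '%a' "$f"
chmod 550 "$f"   # r-x for owner and group: read + traverse, but no write
stat -c '%a' "$f"
rm -f "$f"
```

Note that root bypasses permission checks anyway, so "same permission for root" holds for any mode; what matters is what the owning user and group get.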
How are you creating those KV Store lookups on the HF? What is the reason to use KV Store instead of a CSV file, or a modular input to send those directly to the indexers?
@Mirza_Jaffar1 Ok, so you tried to copy over the contents of the old server's config to a new one, right? There were "some permission issues", right? Did you bother to check what kind of issues they were? Did you fix them?
Hi @Mirza_Jaffar1

Let's strip out all those comments; it looks like your applied config is:

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = server_one:9997, server_two:9997

In theory this should work, but that rests on a number of assumptions. I don't think the lack of an indexAndForward setting will be affecting this, because either way the data should be forwarded, so I won't focus on that.

The first thing to check, on one of the hosts that aren't sending their internal logs, is $SPLUNK_HOME/var/log/splunk/splunkd.log for any errors relating to output, directly on the server. Try the keyword "tcpoutputfd" - do you see any failures/errors?

Can you confirm that you can connect to server_one and server_two from your hosts on port 9997?

nc -vz -w1 server_one 9997

This will prove that connectivity can be established and that your indexers are listening. Are there any firewalls between your other servers and the indexers?

Lastly, what is the inputs.conf configuration on your indexers? Please check with btool - are you using any custom SSL certificates or requiring client certificates?

$SPLUNK_HOME/bin/splunk btool inputs list --debug splunktcp
Actually, you need to escape the dollar sign if you are not using single quotes in most shells. If you are using single quotes for strings, you should not escape the contents.

:/ $ echo \$
$
:/ $ echo "\$"
$
:/ $ echo '$'
$
:/ $ echo '\$'
\$
What is your architecture, and how have you configured IA?