All Posts


The Splunk server is not the same as the Splunk software running on it. You can limit connectivity to the Splunk server at the operating-system level using iptables/firewalld/Windows Firewall...
@MBristow7 I can see this app is compatible with Splunk Enterprise but not with Splunk Cloud; however, the app has been archived.
Is the Splunk App for Salesforce compatible with Splunk Cloud?
The coldPath, thawedPath, etc. are folders inside the index folder in Splunk, and they can be hosted on the same volume (the default installation of Splunk creates these folders inside the $SPLUNK_DB folder, so it does the same). For better performance (and sometimes cost efficiency), we recommend having a separate volume for hot/warm and cold buckets (keep faster disk for hot/warm and slower/cheaper disk for cold buckets, as they are searched less often).
Hi @fatsug
By default these are typically used:
homePath = $SPLUNK_DB/<indexName>/db
coldPath = $SPLUNK_DB/<indexName>/colddb
thawedPath = $SPLUNK_DB/<indexName>/thaweddb
Hot/warm buckets are in homePath and cold buckets are in coldPath. thawedPath is used for restoring buckets which have been frozen out to an external location using a coldToFrozenScript - it's a required setting even if you don't plan to freeze/restore data.
In terms of home vs cold - a lot of customers choose faster storage (such as SSD) for the homePath location, versus cheaper/slower storage for coldPath where older data is typically located.
Your assumptions around using volumes are correct. You can specify multiple volumes on the same path, making sure that the combined maxVolumeDataSizeMB for your volumes doesn't exceed the disk size! You are essentially using it as a logical separation in the same physical space.
Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards
Will
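As a minimal sketch of that logical separation (the volume names, mount point and sizes below are assumptions for illustration, not values from this thread), indexes.conf could define two volumes and cap their combined maxVolumeDataSizeMB below the disk capacity:

[volume:hotwarm]
# assumed mount point - adjust to your environment
path = /data/splunk/hotwarm
maxVolumeDataSizeMB = 500000

[volume:cold]
# may point at the same physical disk; the combined caps must stay below its size
path = /data/splunk/cold
maxVolumeDataSizeMB = 1500000

[my_index]
# hypothetical index used only to show the path layout
homePath = volume:hotwarm/my_index/db
coldPath = volume:cold/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb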
Hi @SplunkSN
Version 2.0.8 of the Splunk Add-on for AppDynamics is Splunk Cloud compatible, so you should be able to install it on your Splunk Cloud stack already. The latest version (3.1.2) was uploaded to Splunkbase 3 days ago, on the 18th of March, so it is likely waiting for the AppInspect vetting process to complete before it is marked as compatible with Splunk Cloud. I would be very surprised if it failed the Cloud vetting process, as it is developed internally at Splunk.
Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards
Will
Hi,
One option would be to:
1 - Get rid of whatever data comes before the valid JSON. For the example you posted, we can ask Splunk to delete this:
Mar 18 02:32:19 MachineName python3[948]: DEBUG:root:... Dispatching:
I'd use this in a props.conf:
SEDCMD-removeheader=s/.*DEBUG:root:\.\.\. Dispatching: //g
2 - Replace single quotes with double quotes, still in the props.conf:
SEDCMD-replace_simple_quotes=s/'/"/g
3 - Set KV_MODE=json so the JSON fields are extracted at search time:
KV_MODE=json
The props.conf could look like this:
[custom_sourcetype]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
CHARSET=UTF-8
category=Custom
pulldown_type=true
SEDCMD-removeheader=s/.*DEBUG:root:\.\.\. Dispatching: //g
SEDCMD-replace_simple_quotes=s/'/"/g
KV_MODE=json
It works in my lab.
Best,
Ch.
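To make the effect concrete (the JSON payload below is a hypothetical example, not taken from the original post), the two SEDCMDs would turn a raw event like

Mar 18 02:32:19 MachineName python3[948]: DEBUG:root:... Dispatching: {'status': 'ok', 'count': 3}

into

{"status": "ok", "count": 3}

after which KV_MODE=json can extract status and count as fields. Note that the quote replacement is applied to the whole event, so it would also rewrite any legitimate single quotes inside string values.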
Hi @BRFZ
If your data is landing in Splunk then the next thing you'll probably want to look at is ensuring that it is CIM compliant, and then starting to enable/create Rules based on your requirements. To do this properly you want to make sure it is planned out well and that you have clear requirements, rather than enabling lots of Rules sporadically!
Some good resources to check out are:
Splunk Lantern - https://lantern.splunk.com/Security/Getting_Started/Getting_started_with_ES
Splunk Security Essentials - https://splunkbase.splunk.com/app/3435
Splunk ES 101 video - https://www.youtube.com/watch?v=Euas6lCK-LE
Splunk ES Certified Admin training path - https://www.splunk.com/en_us/training/certification-track/splunk-es-certified-admin.html
Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards
Will
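As a quick way to check whether onboarded data is actually populating a CIM data model (the Authentication model here is just an example; substitute whichever models your Rules depend on), a tstats search like this shows which sourcetypes are feeding it:

| tstats count from datamodel=Authentication by sourcetype

If the count comes back empty, the data is either not yet CIM compliant or the relevant tags/eventtypes are missing.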
Hi @molla
This isn't something you can do with a cluster map - although a Choropleth map does highlight the regions, the bins that it groups your stats into are quite cumbersome to manage. It might work well for what you need though - have you already tried a Choropleth map?
Another option might be to use the Simple Map Viz app (https://splunkbase.splunk.com/app/5166), which looks like it should do what you are looking for. Please note that this is only for an XML (not Dashboard Studio) dashboard.
Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards
Will
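For reference, a minimal Choropleth-style search could look like the sketch below; it uses Splunk's built-in geo_countries lookup, while the index and the Country field are assumptions about your data:

index=my_index | stats count by Country | geom geo_countries featureIdField=Country

The geom command attaches the country outline geometry, which the Choropleth map visualization then shades according to count.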
Hi All,
We have a requirement to onboard the infrastructure metrics (CPU, memory and disk) monitored using AppDynamics into Splunk. Both Splunk and AppDynamics are on cloud. I see the Splunk technology add-on for AppDynamics on Splunkbase, but it is not supported for the Splunk Cloud version. Is there any other way we can onboard such metrics into Splunk?
Thanks
Happy Splunking
Without a tiered storage model it seems like there would be little argument for using cold/frozen storage, except potentially if additional compression helps save space. If not, using only a homePath in indexes.conf would seem to make all data readily accessible as hot/warm. However, checking the documentation, there seem to be three paths for indexes that are required for splunkd to start: homePath, coldPath and thawedPath (indexes.conf - Splunk Documentation).
So using a single disk/volume/mount, what does the indexes.conf look like? Should the same path just be set for all three? Making sure that maxVolumeDataSizeMB adds up to the total volume available on /data/splunk/warm.

[volume:storage]
path = /data/splunk/warm/
# adjust when correct disk is mounted
maxVolumeDataSizeMB = 2800000
...
...
[volume:_splunk_summaries]
path = /data/splunk/warm/
# ~ 200GB
maxVolumeDataSizeMB = 200000
...
...
[main]
homePath = volume:storage/defaultdb/db
coldPath = volume:storage/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb

[history]
homePath = volume:storage/historydb/db
coldPath = volume:storage/historydb/colddb
thawedPath = $SPLUNK_DB/historydb/thaweddb

[summary]
homePath = volume:storage/summarydb/db
coldPath = volume:storage/summarydb/colddb
thawedPath = $SPLUNK_DB/summarydb/thaweddb
...
...
[windows]
homePath = volume:storage/windows/db
coldPath = volume:storage/windows/colddb
summaryHomePath = volume:storage/windows/summary
thawedPath = $SPLUNK_DB/windows/thaweddb
tstatsHomePath = volume:_splunk_summaries/windows/datamodel_summary
frozenTimePeriodInSecs = 63072000

[linux]
homePath = volume:storage/linux/db
coldPath = volume:storage/linux/colddb
summaryHomePath = volume:storage/linux/summary
thawedPath = $SPLUNK_DB/linux/thaweddb
tstatsHomePath = volume:_splunk_summaries/linux/datamodel_summary
frozenTimePeriodInSecs = 63072000

I'm assuming this would work, right? Though as it seems that Splunk requires, and does make use of, "cold" and "thawed" anyway, does it make more sense to just partition mounts for warm and cold separately anyway?

[volume:warm]
path = /data/splunk/warm/
# adjust when correct disk is mounted
maxVolumeDataSizeMB = 500000

[volume:cold]
path = /data/splunk/warm/
# adjust when correct disk is mounted
maxVolumeDataSizeMB = 2500000
...
...
[volume:_splunk_summaries]
path = /data/splunk/warm/
# ~ 200GB
maxVolumeDataSizeMB = 200000
...
...
[main]
homePath = volume:warm/defaultdb/db
coldPath = volume:cold/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb

[history]
homePath = volume:warm/historydb/db
coldPath = volume:cold/historydb/colddb
thawedPath = $SPLUNK_DB/historydb/thaweddb

[summary]
homePath = volume:warm/summarydb/db
coldPath = volume:cold/summarydb/colddb
thawedPath = $SPLUNK_DB/summarydb/thaweddb
...
...
[windows]
homePath = volume:warm/windows/db
coldPath = volume:cold/windows/colddb
summaryHomePath = volume:warm/windows/summary
thawedPath = $SPLUNK_DB/windows/thaweddb
tstatsHomePath = volume:_splunk_summaries/windows/datamodel_summary
frozenTimePeriodInSecs = 63072000

[linux]
homePath = volume:warm/linux/db
coldPath = volume:warm/linux/colddb
summaryHomePath = volume:warm/linux/summary
thawedPath = $SPLUNK_DB/linux/thaweddb
tstatsHomePath = volume:_splunk_summaries/linux/datamodel_summary
frozenTimePeriodInSecs = 63072000

Does it matter, and what would be "best practice"?
Hi there,
Splunk Enterprise Security (ES) is a sort of extra layer on top of Splunk Enterprise, and it brings you more integrated possibilities:
More possibilities when it comes to creating Alerts (called Notables in ES [this name may have changed in version 8 though])
An alert management system (Incident Review) which allows a team to watch alerts and investigate them
An IOC detection and management system
Tons of useful dashboards
All of that relies heavily on:
Your data: whether the data you're already ingesting into Splunk Enterprise is CIM compliant. Documentation: https://docs.splunk.com/Documentation/CIM/6.0.2/User/Overview
How well this data is mapped to Splunk data models. Everything is well explained on this page: https://docs.splunk.com/Documentation/ES/8.0.2/Install/DataSourcePlanning
Identities (login accounts) and Assets (hosts): you must give Splunk ES a list of the account names (identities) of the users of your organization and the hostnames / IP addresses of the assets of your organization. This process is explained on this page: https://docs.splunk.com/Documentation/ES/8.0.2/Admin/VerifyAssetIdentityData
Configuring ES to its full potential can take some time and energy, but it is worth it.
Best,
Ch.
Hello,
I am currently working on configuring the Splunk Enterprise Security app. I already have data flowing into Splunk Enterprise, but I'm not sure how to properly configure the data inputs for the app. Could anyone guide me on how to configure the data sources in the Enterprise Security app? If there is any specific documentation on this, I would appreciate it if you could provide it.
Try something like this - note the use of empty tokens rather than "true" and "false" (I have added done handlers to show when the searches complete - which they don't if they are still waiting for input from the unset tokens).

<form version="1.1" theme="light">
  <label>Change on Condition</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="spliterror_1" searchWhenChanged="true">
      <label>Splits</label>
      <choice value="*">All</choice>
      <choice value="false">Exclude</choice>
      <choice value="true">Splits Only</choice>
      <prefix>isSplit="</prefix>
      <suffix>"</suffix>
      <default>*</default>
      <change>
        <condition label="All">
          <set token="ShowTrue"></set>
          <set token="ShowFalse"></set>
        </condition>
        <condition label="Exclude">
          <set token="ShowFalse"></set>
          <unset token="ShowTrue"></unset>
        </condition>
        <condition label="Splits Only">
          <unset token="ShowFalse"></unset>
          <set token="ShowTrue"></set>
        </condition>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <html>
        <p>Exclude completed $ExcludeComplete$</p>
        <p>Split only completed $SplitOnlyComplete$</p>
      </html>
    </panel>
  </row>
  <row>
    <panel depends="$ShowFalse$">
      <table>
        <title>Exclude</title>
        <search>
          <done>
            <eval token="ExcludeComplete">strftime(time(),"%F %T")</eval>
          </done>
          <query>index=_internal $ShowFalse$ | stats count by component</query>
          <earliest>-15m</earliest>
          <latest>now</latest>
        </search>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
    <panel depends="$ShowTrue$">
      <table>
        <title>Split only</title>
        <search>
          <done>
            <eval token="SplitOnlyComplete">strftime(time(),"%F %T")</eval>
          </done>
          <query>index=_internal $ShowTrue$ | stats count by component</query>
          <earliest>-15m</earliest>
          <latest>now</latest>
        </search>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>
Hi Splunkers,
I would like to display a count split across several locations on a map. On the map, I would like only the outline of the country to be highlighted. Is this possible with the cluster map view?
Hi @Fr3nchee,
in addition to the perfect information from @kiran_panchavat, remember that Logstash modifies the original logs, putting the original log in a field of the JSON called message. This means that the add-ons you can find on Splunkbase don't work. You have two choices:
restore the original log format by extracting the message field from the JSON and putting it in _raw,
create your own parsers.
I suggest the first solution, because the second one takes very long to implement. Even the first isn't so easy, because you must use the INGEST_EVAL setting and the json_extract function; for this reason I suggest engaging someone who has already done this kind of job, at least to prepare the work.
Ciao.
Giuseppe
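A minimal sketch of that first option, assuming the events arrive with a sourcetype called logstash_json and that your Splunk version supports json_extract at ingest time (the sourcetype and transform names below are hypothetical), would be an index-time transform that replaces _raw with the embedded message field:

props.conf:
[logstash_json]
TRANSFORMS-restore_original = restore_raw_from_message

transforms.conf:
[restore_raw_from_message]
# replace the whole event with the original log text carried in the "message" key
INGEST_EVAL = _raw=json_extract(_raw, "message")

This runs on the indexing tier (indexer or heavy forwarder), so downstream add-ons then see the original log text in _raw.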
@Fr3nchee
Before Logstash can send logs, Splunk needs to be configured to receive them.
Open your Splunk instance and create a new HEC data input:
Go to Settings > Data Inputs > HTTP Event Collector.
Click New Token and give it a name.
Select the index where you want the data to be stored (e.g. "logstash"). You have to create the index on the HF and also on the indexers.
Copy the token for later use.
Refer to this for more info: Format events for HTTP Event Collector - Splunk Documentation
Follow the documentation below for more information: GitHub - bonifield/logstash-to-splunk: writeup about sending Logstash data to Splunk using the HTTP Event Collector
Verify data in Splunk: go to the search head and search for the data you specified, e.g. index=logstash
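For reference, the same HEC input can also be expressed in inputs.conf on the instance receiving the data; the stanza name, index and token value below are placeholders, not values from this thread:

[http]
# enable the HTTP Event Collector globally (8088 is the default port)
disabled = 0
port = 8088

[http://logstash_hec]
disabled = 0
index = logstash
token = <paste-the-generated-token-here>

The logstash index itself still has to be declared in indexes.conf on the indexers, as noted above.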
Hello all,
I'm very new to Splunk - I've been playing around with it for less than 3 months. I have been tasked with sending logs from Logstash into Splunk, but I have no idea where to start. I've been looking online, but the information I find is very confusing. Does someone have some kind of guide that explains how to get data from Logstash to Splunk in detail, including what files need to be configured in Logstash? Any help would be appreciated. Thanks
@Haleb Hey, take a look at this documentation - I think it covers the same issue you're running into: Solved: KV Store Failing to start 9.4.1 - Splunk Community
Splunk doesn't do IP-based restrictions natively; access control is all user-to-role mapping. They'd need a reverse proxy like NGINX to restrict by IP, but that's outside Splunk itself. Mixing the two is a category error.