I would like to automatically extract fields using props.conf. When there is a pattern like the one below, I want to extract each file name. attach_filename:[""] contains one or two file names. How can I extract all of the file names?

"attach_filename":["image.png","GoT.S7E2.BOTS.BOTS.BOTS.mkv.torrent"]
"attach_filename":["image.png","Office2016_Patcher_For_OSX.torrent"]
"attach_filename":["image.png"]
"attach_filename":["Saccharomyces_cerevisiae_patent.docx"]

The extracted values should be stored in a field named file_name, e.g.:

file_name: image.png, Saccharomyces_cerevisiae_patent.docx, GoT.S7E2.BOTS.BOTS.BOTS.mkv.torrent, Office2016_Patcher_For_OSX.torrent
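One possible search-time sketch, with placeholder names (the sourcetype my_json_sourcetype and the transform name attach_filename_mv are assumptions, and the regex assumes the file names are the quoted strings inside the square brackets), is a props.conf/transforms.conf pair that uses MV_ADD so every match becomes a value of file_name:

props.conf:
[my_json_sourcetype]
REPORT-attach_filenames = attach_filename_mv

transforms.conf:
[attach_filename_mv]
REGEX = "(?<file_name>[^"]+)"(?=[^\[\]]*\])
MV_ADD = true

With MV_ADD = true the extraction keeps every match instead of only the first, so file_name becomes a multivalue field; adjust the regex if your events contain other bracketed arrays.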
I have one forwarder and three indexer servers. Each indexer server holds the indexes index=card, index=bank, and index=error.
I can't answer your specific question other than to ask whether you have inputs or tokens, which other users with similar questions have discussed. However, there is a relatively new app, https://splunkbase.splunk.com/app/7171, which may be of interest, as it will give you more control over the output.
If you use Splunk Enterprise or Splunk Cloud, you can guard against loss of data when forwarding by enabling the indexer acknowledgment capability. With indexer acknowledgment, the forwarder will resend any data that the receiver does not acknowledge as "received". You enable indexer acknowledgment on the forwarder, in the outputs.conf file. See how acknowledgement works. See how acknowledgement failure is handled. However, if the forwarder is restarted or stopped while waiting for indexer acknowledgment, in most cases the unacknowledged data is not resent upon forwarder restart. That is because indexer acknowledgment is just an agreement between the output processor and the target server.

There are 4 major input types:
- File inputs (monitor/batch mode)
- Modular inputs
- Network inputs (TCP/UDP)
- HTTP inputs (HTTP receiver endpoints / HTTP Event Collector)

Only the file input in monitor mode can resend data if the forwarder is restarted or stopped while waiting for indexer acknowledgment.

Acknowledgement is sent back to the forwarder after the replication factor is met. That means for a replication factor of 3, the source indexer waits on acknowledgement from 2 replication target indexers. Inputs on forwarders are not aware of the indexer acknowledgement process. Latency increases when the target server is not an indexer, as the intermediate tier will wait for acknowledgement before acknowledging back to the edge forwarder. For more information, see the outputs.conf spec file.
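As a minimal sketch of enabling this on the forwarder (the output group name and indexer host names below are placeholders, not from the post), the outputs.conf setting is useACK:

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997
useACK = true

useACK = true is the setting that turns on indexer acknowledgment for that output group.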
If you have access to the _audit index, run this search

index=_audit host="*" sourcetype=audittrail action=search (info=granted OR info=completed OR info=canceled) provenance=UI:dashboard*
| rex field=provenance "UI:[Dd]ashboard:(?<dashboard>.*)"
| timechart count by dashboard

Note that if the provenance has an upper case D for Dashboard, it is a classic Simple XML dashboard, whereas if it has a lower case d, it is a Dashboard Studio dashboard.
I am also experiencing this issue, though it occurred after moving from Ubuntu to a STIG-hardened RHEL 9. I am curious what the fix was for you, because I've tried everything I can think of.
| rex "requestBody (?<requestBody>\{.*\})$" | spath input=requestBody source.collaborators.entries{}.accessible_by.name output=accessible_by.name | spath input=requestBody source.name | where 'source... See more...
| rex "requestBody (?<requestBody>\{.*\})$" | spath input=requestBody source.collaborators.entries{}.accessible_by.name output=accessible_by.name | spath input=requestBody source.name | where 'source.name'="My recordings"
You'll need to provide more information:
- Are you exporting to CSV?
- When you say null values, do you mean the exported data has the word NULL, or is there just an empty field (i.e. ,, in CSV)?
- How are you exporting the data - are you using the export button underneath the search, or are you trying to export using the Export button at the top of the dashboard?
How can I troubleshoot this? When using a dashboard to export data, the exported data has numerous NULL values where there should be actual data in the table output in Splunk.
Try launching the search with the token values filled in while the dropdown is set to a non-ALL value. You could also post a sanitized version of your search so that we can see if something looks broken.
Are you file monitoring? If you are, the issue has to do with the following:

When using a file monitoring input or folder monitoring input, do not use recursive search or three-dot notation (...); instead prefer non-recursive search or asterisk notation (*). Example:

[monitor:///home/*/.bash_history]

is much better programmatically than

[monitor:///home/.../.bash_history]

When you absolutely must use recursive search, then:
- Make sure the total number of files under the main root directory you are searching is not huge.
- Make sure there are no cyclic links that could cause Splunk to go into an infinite loop.
I want to start of by saying I am extremely new to splunk, so please bear with me, I'm not sure at all if I'm on the right track so please feel let me know if I need to try something else. I have two Cisco ASA5506s are used as firewalls. Searching for either of their hostnames only yields results for about 17 days or so. So if today is the 1st day, it will overwrite the 17th day to record tomorrows logs. Since all I was doing was just trying to get a total view of how many total entries it's pulling from all indexes I wasn't sure which index could be the reason why it's not logging past 17 days. Poking around I found that the _syslog and _metrics indexes both only had logs 14-15 days old. So that lead me to modify the indexes.conf file, however this did not help log the firewalls past 17 days. What else should I be looking for? These devices see millions of hits daily, so that could possibly be contribiting to this as well.     Previous: Indexes.conf [default] serviceSubtaskTimingPeriod = 30 serviceInactiveIndexesPeriod = 60 enableRealtimeSearch = true timePeriodInSecBeforeTsidxReduction = 604800 serviceMetaPeriod = 25 defaultDatabase = main rotatePeriodInSecs = 60 rtRouterThreads = 0 enableTsidxReduction = false maxHotIdleSecs = 0 bucketRebuildMemoryHint = auto suspendHotRollByDeleteQuery = false maxHotSpanSecs = 7776000 suppressBannerList = maxBucketSizeCacheEntries = 0 hotBucketTimeRefreshInterval = 10 maxHotBuckets = 3 processTrackerServiceInterval = 1 maxDataSize = auto maxRunningProcessGroups = 8 minRawFileSyncSecs = disable enableDataIntegrityControl = false minStreamGroupQueueSize = 2000 maxMetaEntries = 1000000 throttleCheckPeriod = 15 tstatsHomePath = volume:_splunk_summaries\$_index_name\datamodel_summary tsidxReductionCheckPeriodInSec = 600 maxBloomBackfillBucketAge = 30d datatype = event syncMeta = true partialServiceMetaPeriod = 0 frozenTimePeriodInSecs = 188697600 maxGlobalDataSizeMB = 0 quarantinePastSecs = 77760000 compressRawdata = true coldToFrozenScript = coldPath.maxDataSizeMB = 0 enableOnlineBucketRepair = true repFactor = 0 rtRouterQueueSize = 10000 maxTimeUnreplicatedWithAcks = 60 assureUTF8 = false maxTimeUnreplicatedNoAcks = 300 rawChunkSizeBytes = 131072 memPoolMB = auto homePath.maxDataSizeMB = 0 warmToColdScript = maxWarmDBCount = 300 minHotIdleSecsBeforeForceRoll = auto coldToFrozenDir = maxTotalDataSizeMB = 500000 maxConcurrentOptimizes = 6 maxRunningProcessGroupsLowPriority = 1 streamingTargetTsidxSyncPeriodMsec = 5000 journalCompression = gzip quarantineFutureSecs = 2592000 splitByIndexKeys = sync = 0 serviceOnlyAsNeeded = true [_audit] bucketRebuildMemoryHint = 0 compressRawdata = 1 coldPath = $SPLUNK_DB\audit\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\audit\thaweddb tstatsHomePath = volume:_splunk_summaries\audit\datamodel_summary homePath = $SPLUNK_DB\audit\db rtRouterThreads = syncMeta = 1 maxTotalDataSizeMB = 5120 rtRouterQueueSize = [_internal] bucketRebuildMemoryHint = 0 syncMeta = 1 maxHotSpanSecs = 432000 compressRawdata = 1 coldPath = $SPLUNK_DB\_internaldb\colddb minHotIdleSecsBeforeForceRoll = 0 maxDataSize = 1000 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\_internaldb\thaweddb tstatsHomePath = volume:_splunk_summaries\_internaldb\datamodel_summary homePath = $SPLUNK_DB\_internaldb\db rtRouterThreads = enableTsidxReduction = 0 
maxTotalDataSizeMB = 25600 frozenTimePeriodInSecs = 188697600 rtRouterQueueSize = [_introspection] bucketRebuildMemoryHint = 0 syncMeta = 1 compressRawdata = 1 coldPath = $SPLUNK_DB\_introspection\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\_introspection\thaweddb homePath = $SPLUNK_DB\_introspection\db rtRouterThreads = maxDataSize = 1024 maxTotalDataSizeMB = 5120 frozenTimePeriodInSecs = 1209600 rtRouterQueueSize = [_telemetry] bucketRebuildMemoryHint = 0 syncMeta = 1 compressRawdata = 1 coldPath = $SPLUNK_DB\_telemetry\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\_telemetry\thaweddb homePath = $SPLUNK_DB\_telemetry\db rtRouterThreads = maxDataSize = 256 maxTotalDataSizeMB = 500 frozenTimePeriodInSecs = 63072000 rtRouterQueueSize = [_thefishbucket] bucketRebuildMemoryHint = 0 syncMeta = 1 compressRawdata = 1 coldPath = $SPLUNK_DB\fishbucket\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\fishbucket\thaweddb tstatsHomePath = volume:_splunk_summaries\fishbucket\datamodel_summary homePath = $SPLUNK_DB\fishbucket\db rtRouterThreads = maxDataSize = 500 maxTotalDataSizeMB = 500 frozenTimePeriodInSecs = 188697600 rtRouterQueueSize = [history] bucketRebuildMemoryHint = 0 syncMeta = 1 compressRawdata = 1 coldPath = $SPLUNK_DB\historydb\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\historydb\thaweddb tstatsHomePath = volume:_splunk_summaries\historydb\datamodel_summary homePath = $SPLUNK_DB\historydb\db rtRouterThreads = maxDataSize = 10 maxTotalDataSizeMB = 500 frozenTimePeriodInSecs = 604800 rtRouterQueueSize = [main] enableOnlineBucketRepair = 1 bucketRebuildMemoryHint = 0 syncMeta = 1 minHotIdleSecsBeforeForceRoll = 0 compressRawdata = 1 coldPath = $SPLUNK_DB\defaultdb\colddb maxHotBuckets = 10 maxDataSize = auto_high_volume maxConcurrentOptimizes = 6 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\defaultdb\thaweddb tstatsHomePath = volume:_splunk_summaries\defaultdb\datamodel_summary homePath = $SPLUNK_DB\defaultdb\db rtRouterThreads = enableTsidxReduction = 0 maxHotIdleSecs = 86400 maxTotalDataSizeMB = 10240 rtRouterQueueSize = [splunklogger] disabled = true bucketRebuildMemoryHint = 0 compressRawdata = 1 coldPath = $SPLUNK_DB\splunklogger\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\splunklogger\thaweddb homePath = $SPLUNK_DB\splunklogger\db rtRouterThreads = syncMeta = 1 maxTotalDataSizeMB = 500 rtRouterQueueSize = [summary] bucketRebuildMemoryHint = 0 compressRawdata = 1 coldPath = $SPLUNK_DB\summarydb\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\summarydb\thaweddb tstatsHomePath = volume:_splunk_summaries\summarydb\datamodel_summary homePath = $SPLUNK_DB\summarydb\db rtRouterThreads = syncMeta = 1 maxTotalDataSizeMB = 500 rtRouterQueueSize = 
[volume:_splunk_summaries] path = $SPLUNK_DB   Modified indexes.conf: [default] serviceSubtaskTimingPeriod = 30 serviceInactiveIndexesPeriod = 60 enableRealtimeSearch = true timePeriodInSecBeforeTsidxReduction = 604800 serviceMetaPeriod = 25 defaultDatabase = main rotatePeriodInSecs = 60 rtRouterThreads = 0 enableTsidxReduction = false maxHotIdleSecs = 0 bucketRebuildMemoryHint = auto suspendHotRollByDeleteQuery = false maxHotSpanSecs = 7776000 suppressBannerList = maxBucketSizeCacheEntries = 0 hotBucketTimeRefreshInterval = 10 maxHotBuckets = 3 processTrackerServiceInterval = 1 maxDataSize = auto maxRunningProcessGroups = 8 minRawFileSyncSecs = disable enableDataIntegrityControl = false minStreamGroupQueueSize = 2000 maxMetaEntries = 1000000 throttleCheckPeriod = 15 tstatsHomePath = volume:_splunk_summaries\$_index_name\datamodel_summary tsidxReductionCheckPeriodInSec = 600 maxBloomBackfillBucketAge = 30d datatype = event syncMeta = true partialServiceMetaPeriod = 0 frozenTimePeriodInSecs = 188697600 maxGlobalDataSizeMB = 0 quarantinePastSecs = 77760000 compressRawdata = true coldToFrozenScript = coldPath.maxDataSizeMB = 0 enableOnlineBucketRepair = true repFactor = 0 rtRouterQueueSize = 10000 maxTimeUnreplicatedWithAcks = 60 assureUTF8 = false maxTimeUnreplicatedNoAcks = 300 rawChunkSizeBytes = 131072 memPoolMB = auto homePath.maxDataSizeMB = 0 warmToColdScript = maxWarmDBCount = 300 minHotIdleSecsBeforeForceRoll = auto coldToFrozenDir = maxTotalDataSizeMB = 500000 maxConcurrentOptimizes = 6 maxRunningProcessGroupsLowPriority = 1 streamingTargetTsidxSyncPeriodMsec = 5000 journalCompression = gzip quarantineFutureSecs = 2592000 splitByIndexKeys = sync = 0 serviceOnlyAsNeeded = true [_audit] bucketRebuildMemoryHint = 0 compressRawdata = 1 coldPath = $SPLUNK_DB\audit\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\audit\thaweddb tstatsHomePath = volume:_splunk_summaries\audit\datamodel_summary homePath = $SPLUNK_DB\audit\db rtRouterThreads = syncMeta = 1 maxTotalDataSizeMB = 5120 rtRouterQueueSize = [_internal] bucketRebuildMemoryHint = 0 syncMeta = 1 maxHotSpanSecs = 432000 compressRawdata = 1 coldPath = $SPLUNK_DB\_internaldb\colddb minHotIdleSecsBeforeForceRoll = 0 maxDataSize = 1000 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\_internaldb\thaweddb tstatsHomePath = volume:_splunk_summaries\_internaldb\datamodel_summary homePath = $SPLUNK_DB\_internaldb\db rtRouterThreads = enableTsidxReduction = 0 maxTotalDataSizeMB = 51200 frozenTimePeriodInSecs = 188697600 rtRouterQueueSize = archiver.enableDataArchive = 0 metric.enableFloatingPointCompression = 1 selfStorageThreads = tsidxWritingLevel = [_introspection] bucketRebuildMemoryHint = 0 syncMeta = 1 compressRawdata = 1 coldPath = $SPLUNK_DB\_introspection\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\_introspection\thaweddb homePath = $SPLUNK_DB\_introspection\db rtRouterThreads = maxDataSize = 1024 maxTotalDataSizeMB = 5120 frozenTimePeriodInSecs = 1209600 rtRouterQueueSize = [_telemetry] bucketRebuildMemoryHint = 0 syncMeta = 1 compressRawdata = 1 coldPath = $SPLUNK_DB\_telemetry\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 
enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\_telemetry\thaweddb homePath = $SPLUNK_DB\_telemetry\db rtRouterThreads = maxDataSize = 256 maxTotalDataSizeMB = 500 frozenTimePeriodInSecs = 63072000 rtRouterQueueSize = [_thefishbucket] bucketRebuildMemoryHint = 0 syncMeta = 1 compressRawdata = 1 coldPath = $SPLUNK_DB\fishbucket\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\fishbucket\thaweddb tstatsHomePath = volume:_splunk_summaries\fishbucket\datamodel_summary homePath = $SPLUNK_DB\fishbucket\db rtRouterThreads = maxDataSize = 500 maxTotalDataSizeMB = 500 frozenTimePeriodInSecs = 188697600 rtRouterQueueSize = [history] bucketRebuildMemoryHint = 0 syncMeta = 1 compressRawdata = 1 coldPath = $SPLUNK_DB\historydb\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\historydb\thaweddb tstatsHomePath = volume:_splunk_summaries\historydb\datamodel_summary homePath = $SPLUNK_DB\historydb\db rtRouterThreads = maxDataSize = 10 maxTotalDataSizeMB = 500 frozenTimePeriodInSecs = 604800 rtRouterQueueSize = [main] enableOnlineBucketRepair = 1 bucketRebuildMemoryHint = 0 syncMeta = 1 minHotIdleSecsBeforeForceRoll = 0 compressRawdata = 1 coldPath = $SPLUNK_DB\defaultdb\colddb maxHotBuckets = 10 maxDataSize = auto_high_volume maxConcurrentOptimizes = 6 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\defaultdb\thaweddb tstatsHomePath = volume:_splunk_summaries\defaultdb\datamodel_summary homePath = $SPLUNK_DB\defaultdb\db rtRouterThreads = enableTsidxReduction = 0 maxHotIdleSecs = 86400 maxTotalDataSizeMB = 10240 rtRouterQueueSize = [splunklogger] disabled = true bucketRebuildMemoryHint = 0 compressRawdata = 1 coldPath = $SPLUNK_DB\splunklogger\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\splunklogger\thaweddb homePath = $SPLUNK_DB\splunklogger\db rtRouterThreads = syncMeta = 1 maxTotalDataSizeMB = 500 rtRouterQueueSize = [_syslog] bucketRebuildMemoryHint = 0 compressRawdata = 1 coldPath = $SPLUNK_DB\_syslog\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\_syslog\thaweddb tstatsHomePath = volume:_splunk_summaries\_syslog\datamodel_summary homePath = $SPLUNK_DB\_syslog\db rtRouterThreads = syncMeta = 1 maxTotalDataSizeMB = 10240 frozenTimePeriodInSecs = 7776000 rtRouterQueueSize = [_metrics] bucketRebuildMemoryHint = 0 compressRawdata = 1 coldPath = $SPLUNK_DB\_metrics\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\_metrics\thaweddb tstatsHomePath = volume:_splunk_summaries\_metrics\datamodel_summary homePath = $SPLUNK_DB\_metrics\db rtRouterThreads = syncMeta = 1 maxTotalDataSizeMB = 10240 frozenTimePeriodInSecs = 7776000 rtRouterQueueSize = [summary] bucketRebuildMemoryHint = 0 compressRawdata = 1 coldPath = $SPLUNK_DB\summarydb\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = 
$SPLUNK_DB\summarydb\thaweddb tstatsHomePath = volume:_splunk_summaries\summarydb\datamodel_summary homePath = $SPLUNK_DB\summarydb\db rtRouterThreads = syncMeta = 1 maxTotalDataSizeMB = 500 rtRouterQueueSize = [volume:_splunk_summaries] path = $SPLUNK_DB        
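For reference, retention for a given index is governed mainly by frozenTimePeriodInSecs and maxTotalDataSizeMB; whichever limit is reached first causes the oldest buckets to be frozen. A minimal per-index sketch (the index name, paths, and values are placeholders, not taken from the configuration above):

[my_firewall_index]
homePath = $SPLUNK_DB/my_firewall_index/db
coldPath = $SPLUNK_DB/my_firewall_index/colddb
thawedPath = $SPLUNK_DB/my_firewall_index/thaweddb
# keep roughly 90 days, assuming the size cap below is not hit first
frozenTimePeriodInSecs = 7776000
# oldest buckets are frozen once the index exceeds this total size
maxTotalDataSizeMB = 100000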
Typically you would make apps on the Splunk forwarders which have inputs.conf files, e.g. $SPLUNK_HOME/etc/apps/inputapp1/local/inputs.conf. In the inputs.conf file you would have a stanza that specifies which logfile to collect:

[monitor:///var/log/mylog.log]
index = myindex
sourcetype = mysourcetype

You would also have an outputs.conf app in the forwarder which would specify the indexers to forward the logs to, as sketched below.
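A minimal outputs.conf sketch for that forwarder app (the group name and indexer host names are placeholders):

[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997, indexer3.example.com:9997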
If I understand correctly, you would like your string to be converted into a .csv file, and attached to an email sent by SOAR? This would likely be a multi-step procedure:
1. Use a custom function to write the CSV to a temporary file on disk.
2. Use a custom function to call phantom.vault_add() to convert the temporary file on disk into a CSV in the vault.
3. Pass the ID of the file in the vault to the email action block as the attachment input.
I personally have never gotten the SMTP app to successfully send file attachments, so I would recommend testing sending email with a file already in a vault before starting steps 1 and 2.
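A rough sketch of steps 1 and 2 as a single SOAR custom function, assuming Python and the phantom.vault_add() playbook API; the function name, parameters, and temporary directory are assumptions, and the exact return value of vault_add() can differ between SOAR versions:

import os
import tempfile
import phantom.rules as phantom

def string_to_vault_csv(csv_string=None, file_name=None, container=None, **kwargs):
    # Step 1: write the CSV string to a temporary file on disk.
    # Assumption: a directory created by tempfile is acceptable to vault_add();
    # some environments may require a specific vault tmp directory instead.
    tmp_dir = tempfile.mkdtemp()
    tmp_path = os.path.join(tmp_dir, file_name or "export.csv")
    with open(tmp_path, "w") as f:
        f.write(csv_string or "")

    # Step 2: add the temporary file to the vault of the current container.
    # The vault ID contained in the result is what the email action block
    # needs as its attachment input.
    vault_result = phantom.vault_add(container=container,
                                     file_location=tmp_path,
                                     file_name=file_name)
    return vault_result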
I couldn't tell you how to get it to work with makeresults, but if you input a CEF log with the sourcetype "cefevents", then the CEF field extractions should work.
I wanted to change the "Allowed Domains" field under Server Settings -> Email settings. When I do it using the website, I get the following log in audit.log:

"changes":[{"stanza":"email","properties":[{"name":"allowedDomainList","new_value":"domain1.sk","old_value":""},

When a new application is installed, I can see the following log line saying that the value is changing:

"changes":[{"stanza":"email","properties":[{"name":"allowedDomainList","new_value":"domain2.sk","old_value":""},

But the difference is that if the value is changed using the deployment application, I don't see the change reflected on the website - Allowed Domains is empty.

How would you do this?
If you get this error during a search, it is likely because you have an automatic lookup configured which cannot resolve to a valid lookup file. I recommend searching for lookups with keywords like reply_code to see if it exists.
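One way to hunt for the offending automatic lookup, sketched here (the exact REST field names can vary slightly between Splunk versions):

| rest /servicesNS/-/-/data/props/lookups
| search title=*reply_code*
| table title stanza

You can also review Settings > Lookups > Automatic lookups in the UI to see which lookup definitions the automatic lookups point to.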
Hi @ITWhisperer Thanks for assisting in this matter. One additional request: can I search the JSON data/request body for source.name="My Recordings" and tabulate the accessible_by.name? Please help me with the Splunk query. Thanks in advance.
You could make a lookup containing the unix time when the API key expires, along with columns describing the key and where to renew it. Then you could make an alert in Splunk that checks if that unix time is X days away, e.g.

| inputlookup when_keys_expire.csv
``` 7*24*60*60 = 1 week worth of seconds ```
| where expirytime < (now() + 7*24*60*60)

The downside to this is that you would have to manually update the lookup table separately when applying a new key.
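For illustration, the lookup file might look like this (the file name matches the search above; the column names other than expirytime are assumptions):

key_name,expirytime,renew_url
billing-api-key,1767225600,https://example.com/renew/billing
reporting-api-key,1774915200,https://example.com/renew/reporting

expirytime holds the epoch time at which the key expires, which is what the where clause compares against now().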
My issue was with the 3 new internal indexes that Splunk Enterprise introduces. In short, my fix was to add the line selectiveIndexing = true in the $SPLUNK_HOME/etc/system/local/outputs.conf file. Here is a link in the docs referring to this fix. Otherwise, I'm including the synopsis of the symptom/fix from the link I provided initially. Hope that helps.

Resolution

What causes symptom 1?
Splunk Enterprise 9.2.0 introduces a scalable Deployment Server (DS) feature, which makes the DS tier more resilient and highly available. Under the hood, several new internal indexes are introduced to accommodate this feature:
- _dsphonehome
- _dsclient
- _dsappevent
These indexes are defined in Splunk Enterprise 9.2.x by default. If your DS forwards its data to remote indexers and the indexers are running an older Splunk version, the latter will not have the above-mentioned indexes defined. This will result in the DS being unable to forward and search its DS/DC-related events. The DS's Forwarder Management UI is then unable to list the Deployment Clients (DCs), despite the clients phoning home without any issue.

Fix for symptom 1:
The idea behind it is simple: as long as your DS can index its DS/DC events to the 3 indexes above and search them back, your clients should appear in the UI.

Steps:
1. Allow your DS to selectively index the phone home, client and app events to itself. This is especially applicable to an on-prem DS that forwards data to Splunk Cloud indexers, but it can be applied to a completely on-prem/cloud BYO environment as well.

Add the following parameters and values to the DS's outputs.conf file, followed by restarting the splunkd service.

[indexAndForward]
index = true
selectiveIndexing = true

2. This step is applicable if your DS is forwarding its data to an on-prem indexing tier and the indexers' version is older than 9.2.0:

Configure the 3 indexes mentioned earlier on your indexing tier. If your indexers are non-clustered, add the index definitions on each of them manually or using your preferred automation. If your indexers are clustered, push the index definitions from the Cluster Manager and enable replication (repFactor = auto) to benefit from cluster redundancy.

Sample indexes.conf configuration:

[_dsphonehome]
homePath = $SPLUNK_DB/_dsphonehome/db
coldPath = $SPLUNK_DB/_dsphonehome/colddb
thawedPath = $SPLUNK_DB/_dsphonehome/thaweddb
# clustered indexers only
# repFactor = auto

[_dsappevent]
homePath = $SPLUNK_DB/_dsappevent/db
coldPath = $SPLUNK_DB/_dsappevent/colddb
thawedPath = $SPLUNK_DB/_dsappevent/thaweddb
# clustered indexers only
# repFactor = auto

[_dsclient]
homePath = $SPLUNK_DB/_dsclient/db
coldPath = $SPLUNK_DB/_dsclient/colddb
thawedPath = $SPLUNK_DB/_dsclient/thaweddb
# clustered indexers only
# repFactor = auto

There is one additional step only if your DC sends its data to the indexers via an intermediate forwarder AND your intermediate forwarder's version is older than 9.2.x:

Add the following parameter and value to the intermediate forwarder's outputs.conf file, followed by a splunkd service restart.

[tcpout]
forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup|_configtracker|_dsclient|_dsphonehome|_dsappevent)

At this point, the deployment clients should appear in the Forwarder Management UI > Clients tab.
Tips: If you still can't see the clients, run the following query on the DS and check whether it returns some events:

index=_ds*

If the query returns nothing and your DS is also a Distributed Monitoring Console instance, go to Settings > Monitoring Console > Settings > General Setup. Locate your DMC (This instance) and click Edit > Edit Server Roles. Tick the Indexer role and click Save. Run the query again to confirm it is working.