All Posts


Hello, I am new to Splunk. I want to use the sign-in information from Azure AD/Entra ID. Is there a way to get these sign-in logs in real time, or perhaps the syslog for sign-in activity? I have been through Microsoft Log Analytics Workspace, which suggests a latency of 20 seconds to 3 minutes for these logs. Is there a way to reduce this? And is there documentation confirming the latency limits?
You're right. I am trying to extract fields from JSON data. I used botsv2 data, in the "stream:smtp" sourcetype. This is my _raw data (from the search index="botsv2" sourcetype="stream:smtp"):

{"endtime":"2017-08-31T22:56:56.070751Z","timestamp":"2017-08-31T22:56:56.070751Z","ack_packets_in":0,"ack_packets_out":0,"bytes":72,"bytes_in":0,"bytes_out":72,"capture_hostname":"matar","client_rtt":0,"client_rtt_packets":0,"client_rtt_sum":0,"data_packets_in":0,"data_packets_out":1,"dest_ip":"172.31.38.181","dest_mac":"06:6A:51:FA:0A:B0","dest_port":25,"duplicate_packets_in":0,"duplicate_packets_out":0,"flow_id":"b6b9eb1b-e8e1-4cec-ab3c-f7223adc490a","greeting":"ip-172-31-38-181.us-west-2.compute.internal ESMTP Postfix (Ubuntu)","missing_packets_in":0,"missing_packets_out":0,"network_interface":"eth0","packets_in":0,"packets_out":1,"protocol_stack":"ip:tcp:smtp","reply_time":0,"request_ack_time":0,"request_time":0,"response_ack_time":24624,"response_code":220,"response_time":0,"sender_server":"ip-172-31-38-181.us-west-2.compute.internal","server_agent":"ESMTP Postfix (Ubuntu)","server_response":"220 ip-172-31-38-181.us-west-2.compute.internal ESMTP Postfix (Ubuntu)","server_rtt":0,"server_rtt_packets":0,"server_rtt_sum":0,"src_ip":"104.47.34.68","src_mac":"06:E3:CC:18:AA:33","src_port":37952,"time_taken":0,"transport":"tcp"}

I have one more question. Why are the results of index="botsv2" sourcetype="stream:smtp" different from the results of index="botsv2" sourcetype="stream:smtp" attach_filename{}="*"? The field I want to extract only exists in the results of the search with attach_filename{}="*". Here is a sample event from that search:

{"endtime":"2017-08-30T15:08:00.075698Z","timestamp":"2017-08-30T15:07:59.774655Z","ack_packets_in":0,"ack_packets_out":31,"attach_disposition":["attachment"],"attach_filename":["Saccharomyces_cerevisiae_patent.docx"],"attach_size":[142540],"attach_size_decoded":[104162],"attach_transfer_encoding":["base64"],"attach_type":["application/vnd.openxmlformats-officedocument.wordprocessingml.document"],"bytes":155976,"bytes_in":155939,"bytes_out":37,"capture_hostname":"matar","client_rtt":0,"client_rtt_packets":0,"client_rtt_sum":0,"content":["DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;\r\n d=jacobsmythe111.onmicrosoft.com; s=selector1-froth-ly;\r\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version;
My first reaction is: regex is the wrong solution. This looks like part of a JSON document, and treating structured data as a text string is just asking for trouble down the road. Can you share raw events? (Anonymize as needed.)

Or, if this is a developer's joke and you only have this string in a field, let's call it field1, you can still use Splunk's JSON capability to extract the data. It's much more robust. Something like this:

| eval field1 = "{" . field1 . "}"
| spath input=field1

Your mock data will give:

attach_filename{}: image.png, GoT.S7E2.BOTS.BOTS.BOTS.mkv.torrent
field1: {"attach_filename":["image.png","GoT.S7E2.BOTS.BOTS.BOTS.mkv.torrent"]}

attach_filename{}: image.png, Office2016_Patcher_For_OSX.torrent
field1: {"attach_filename":["image.png","Office2016_Patcher_For_OSX.torrent"]}

attach_filename{}: image.png
field1: {"attach_filename":["image.png"]}

attach_filename{}: Saccharomyces_cerevisiae_patent.docx
field1: {"attach_filename":["Saccharomyces_cerevisiae_patent.docx"]}

Here is an emulation you can play with and compare with real data, if your developers really play such a joke:

| makeresults
| fields - _*
| eval field1 = split("\"attach_filename\":[\"image.png\",\"GoT.S7E2.BOTS.BOTS.BOTS.mkv.torrent\"] \"attach_filename\":[\"image.png\",\"Office2016_Patcher_For_OSX.torrent\"] \"attach_filename\":[\"image.png\"] \"attach_filename\":[\"Saccharomyces_cerevisiae_patent.docx\"]", " ")
| mvexpand field1
``` data emulation ```
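And if the events really are complete JSON, like the _raw samples earlier in this thread, a search-time props.conf setting may be all you need instead of any regex. A minimal sketch, assuming the sourcetype is stream:smtp and that you place it wherever search-time parsing happens (search head or standalone instance):

# props.conf (search-time extraction; sourcetype name assumed from this thread)
[stream:smtp]
KV_MODE = json

With KV_MODE = json, JSON arrays come out as multivalue fields such as attach_filename{}, which is exactly what the attach_filename{}="*" search relies on.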
We are able to send iDRAC syslog to Splunk successfully on firmware version 3.xx, but with firmware version 5.xx we aren't successful. Any chance it is related to something in the firmware that I need to configure? Both configurations are the same, and our log collector picks up UDP packets from the iDRACs.
Does anyone know why the eventtype

[wineventlog_index_windows]
definition = index=wineventlog OR index=main

doesn't return anything? Am I doing something wrong in the eventtypes.conf file, or should I declare it somewhere else as well? Thank you very much.
Hi @scelikok,

Thanks. I checked the URL; it is an error page, and I can't locate the requestid in the Splunk internal logs.

Regarding set_permissions.sh: yes, I have run it as root on my Splunk instance. Just to make it clear, I don't need to run a similar script on the Windows server where I deployed the Splunk_TA_Stream app, correct?

Thanks again.
I would like to automatically extract fields using props.conf. When there is a pattern like the one below, I want to extract each file name. attach_filename:[""] contains one or two file names. How can I extract all the file names?

"attach_filename":["image.png","GoT.S7E2.BOTS.BOTS.BOTS.mkv.torrent"]
"attach_filename":["image.png","Office2016_Patcher_For_OSX.torrent"]
"attach_filename":["image.png"]
"attach_filename":["Saccharomyces_cerevisiae_patent.docx"]

The extracted values should be stored in a field called file_name:

file_name: image.png, Saccharomyces_cerevisiae_patent.docx, GoT.S7E2.BOTS.BOTS.BOTS.mkv.torrent, Office2016_Patcher_For_OSX.torrent
I have one forwarder and three indexer servers. Each indexer server holds the indexes index=card, index=bank, and index=error.
I can't answer your specific question other than to ask whether you have inputs or tokens, which are discussed by other users with similar questions. However, there is a relatively new app, https://splunkbase.splunk.com/app/7171, which may be of interest, as it will give you more control over the output.
If you use Splunk Enterprise or Splunk Cloud, you can guard against loss of data when forwarding by enabling the indexer acknowledgment capability. With indexer acknowledgment, the forwarder will resend any data that the receiver does not acknowledge as "received". You enable indexer acknowledgment on the forwarder, in the outputs.conf file. See how acknowledgement works. See how acknowledgement failure is handled.

However, if the forwarder is restarted or stopped while waiting for indexer acknowledgment, in most cases unacknowledged data is not resent upon forwarder restart. That is because indexer acknowledgment is just an agreement between the output processor and the target server.

There are 4 major input types:
- File inputs (monitor/batch mode)
- Modular inputs
- Network inputs (TCP/UDP)
- HTTP inputs (HTTP receiver endpoints / HTTP Event Collector)

Only the file input in monitor mode can resend data if the forwarder is restarted or stopped while waiting for indexer acknowledgment.

Acknowledgement is sent back to the forwarder after the replication factor is met. That means for a replication factor of 3, the source indexer waits on acknowledgement from 2 replication target indexers. Inputs on forwarders are not aware of the indexer acknowledgement process. Latency increases when the target server is not an indexer, because the intermediate tier will wait for acknowledgement before acknowledging back to the edge forwarder.

For more information, see the outputs.conf spec file.
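As a rough illustration, the forwarder side might look like this in outputs.conf; the output group name and indexer addresses are placeholders, but useACK is the actual setting that turns indexer acknowledgment on:

# outputs.conf on the forwarder (group name and servers are example values)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
useACK = true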
If you have access to the _audit index, run this search:

index=_audit host="*" sourcetype=audittrail action=search (info=granted OR info=completed OR info=canceled) provenance=UI:dashboard*
| rex field=provenance "UI:[Dd]ashboard:(?<dashboard>.*)"
| timechart count by dashboard

Note that if the provenance has a capital D for Dashboard, it is a classic Simple XML dashboard, whereas if it's a lower case d, it is a Dashboard Studio dashboard.
I am also experiencing this issue, though it occurred after moving from Ubuntu to a STIG-hardened RHEL 9. I am curious what the fix was for you, because I've tried everything I can think of.
| rex "requestBody (?<requestBody>\{.*\})$" | spath input=requestBody source.collaborators.entries{}.accessible_by.name output=accessible_by.name | spath input=requestBody source.name | where 'source... See more...
| rex "requestBody (?<requestBody>\{.*\})$" | spath input=requestBody source.collaborators.entries{}.accessible_by.name output=accessible_by.name | spath input=requestBody source.name | where 'source.name'="My recordings"
You'll need to provide more information:
- Are you exporting to CSV?
- When you say null values, do you mean the export contains the word NULL, or is there just an empty field in the exported data (i.e. ,, in CSV)?
- How are you exporting the data? Is it just using the export button underneath the search, or are you trying to export using the Export button at the top of the dashboard?
How can I troubleshoot this? When I use a dashboard to export data, the exported data has numerous NULL values where there should be actual data in the table output in Splunk.
Try launching the search with the token values filled in while the dropdown is set to a non-ALL value. You could also post a sanitized version of your search so that we can see if something looks broken.
Are you file monitoring? If you are, the issue has to do with the following:

When using a file monitoring input or folder monitoring input, do not use recursive search (three-dot notation, ...); instead prefer non-recursive search (asterisk notation, *). For example,

[monitor:///home/*/.bash_history]

is much better programmatically than

[monitor:///home/.../.bash_history]

When you must use recursive search, then:
- Make sure the total number of files under the main root directory you are searching is not huge.
- Make sure there are no cyclic links that could cause Splunk to go into an infinite loop.
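As a sketch, the non-recursive form of such a monitor stanza in inputs.conf might look like this; the index and sourcetype values here are only placeholders to replace with your own:

# inputs.conf on the forwarder (index and sourcetype are example values)
[monitor:///home/*/.bash_history]
disabled = false
index = os
sourcetype = bash_history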
I want to start off by saying I am extremely new to Splunk, so please bear with me. I'm not sure at all if I'm on the right track, so please let me know if I need to try something else.

I have two Cisco ASA5506s that are used as firewalls. Searching for either of their hostnames only yields results for about 17 days or so. So if today is the 1st day, it will overwrite the 17th day to record tomorrow's logs. Since all I was doing was trying to get a total view of how many entries it's pulling from all indexes, I wasn't sure which index could be the reason why it's not logging past 17 days. Poking around, I found that the _syslog and _metrics indexes both only had logs 14-15 days old. That led me to modify the indexes.conf file; however, this did not help log the firewalls past 17 days. What else should I be looking for? These devices see millions of hits daily, so that could possibly be contributing to this as well.

Previous: Indexes.conf

[default] serviceSubtaskTimingPeriod = 30 serviceInactiveIndexesPeriod = 60 enableRealtimeSearch = true timePeriodInSecBeforeTsidxReduction = 604800 serviceMetaPeriod = 25 defaultDatabase = main rotatePeriodInSecs = 60 rtRouterThreads = 0 enableTsidxReduction = false maxHotIdleSecs = 0 bucketRebuildMemoryHint = auto suspendHotRollByDeleteQuery = false maxHotSpanSecs = 7776000 suppressBannerList = maxBucketSizeCacheEntries = 0 hotBucketTimeRefreshInterval = 10 maxHotBuckets = 3 processTrackerServiceInterval = 1 maxDataSize = auto maxRunningProcessGroups = 8 minRawFileSyncSecs = disable enableDataIntegrityControl = false minStreamGroupQueueSize = 2000 maxMetaEntries = 1000000 throttleCheckPeriod = 15 tstatsHomePath = volume:_splunk_summaries\$_index_name\datamodel_summary tsidxReductionCheckPeriodInSec = 600 maxBloomBackfillBucketAge = 30d datatype = event syncMeta = true partialServiceMetaPeriod = 0 frozenTimePeriodInSecs = 188697600 maxGlobalDataSizeMB = 0 quarantinePastSecs = 77760000 compressRawdata = true coldToFrozenScript = coldPath.maxDataSizeMB = 0 enableOnlineBucketRepair = true repFactor = 0 rtRouterQueueSize = 10000 maxTimeUnreplicatedWithAcks = 60 assureUTF8 = false maxTimeUnreplicatedNoAcks = 300 rawChunkSizeBytes = 131072 memPoolMB = auto homePath.maxDataSizeMB = 0 warmToColdScript = maxWarmDBCount = 300 minHotIdleSecsBeforeForceRoll = auto coldToFrozenDir = maxTotalDataSizeMB = 500000 maxConcurrentOptimizes = 6 maxRunningProcessGroupsLowPriority = 1 streamingTargetTsidxSyncPeriodMsec = 5000 journalCompression = gzip quarantineFutureSecs = 2592000 splitByIndexKeys = sync = 0 serviceOnlyAsNeeded = true [_audit] bucketRebuildMemoryHint = 0 compressRawdata = 1 coldPath = $SPLUNK_DB\audit\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\audit\thaweddb tstatsHomePath = volume:_splunk_summaries\audit\datamodel_summary homePath = $SPLUNK_DB\audit\db rtRouterThreads = syncMeta = 1 maxTotalDataSizeMB = 5120 rtRouterQueueSize = [_internal] bucketRebuildMemoryHint = 0 syncMeta = 1 maxHotSpanSecs = 432000 compressRawdata = 1 coldPath = $SPLUNK_DB\_internaldb\colddb minHotIdleSecsBeforeForceRoll = 0 maxDataSize = 1000 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\_internaldb\thaweddb tstatsHomePath = volume:_splunk_summaries\_internaldb\datamodel_summary homePath = $SPLUNK_DB\_internaldb\db rtRouterThreads = enableTsidxReduction = 0
maxTotalDataSizeMB = 25600 frozenTimePeriodInSecs = 188697600 rtRouterQueueSize = [_introspection] bucketRebuildMemoryHint = 0 syncMeta = 1 compressRawdata = 1 coldPath = $SPLUNK_DB\_introspection\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\_introspection\thaweddb homePath = $SPLUNK_DB\_introspection\db rtRouterThreads = maxDataSize = 1024 maxTotalDataSizeMB = 5120 frozenTimePeriodInSecs = 1209600 rtRouterQueueSize = [_telemetry] bucketRebuildMemoryHint = 0 syncMeta = 1 compressRawdata = 1 coldPath = $SPLUNK_DB\_telemetry\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\_telemetry\thaweddb homePath = $SPLUNK_DB\_telemetry\db rtRouterThreads = maxDataSize = 256 maxTotalDataSizeMB = 500 frozenTimePeriodInSecs = 63072000 rtRouterQueueSize = [_thefishbucket] bucketRebuildMemoryHint = 0 syncMeta = 1 compressRawdata = 1 coldPath = $SPLUNK_DB\fishbucket\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\fishbucket\thaweddb tstatsHomePath = volume:_splunk_summaries\fishbucket\datamodel_summary homePath = $SPLUNK_DB\fishbucket\db rtRouterThreads = maxDataSize = 500 maxTotalDataSizeMB = 500 frozenTimePeriodInSecs = 188697600 rtRouterQueueSize = [history] bucketRebuildMemoryHint = 0 syncMeta = 1 compressRawdata = 1 coldPath = $SPLUNK_DB\historydb\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\historydb\thaweddb tstatsHomePath = volume:_splunk_summaries\historydb\datamodel_summary homePath = $SPLUNK_DB\historydb\db rtRouterThreads = maxDataSize = 10 maxTotalDataSizeMB = 500 frozenTimePeriodInSecs = 604800 rtRouterQueueSize = [main] enableOnlineBucketRepair = 1 bucketRebuildMemoryHint = 0 syncMeta = 1 minHotIdleSecsBeforeForceRoll = 0 compressRawdata = 1 coldPath = $SPLUNK_DB\defaultdb\colddb maxHotBuckets = 10 maxDataSize = auto_high_volume maxConcurrentOptimizes = 6 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\defaultdb\thaweddb tstatsHomePath = volume:_splunk_summaries\defaultdb\datamodel_summary homePath = $SPLUNK_DB\defaultdb\db rtRouterThreads = enableTsidxReduction = 0 maxHotIdleSecs = 86400 maxTotalDataSizeMB = 10240 rtRouterQueueSize = [splunklogger] disabled = true bucketRebuildMemoryHint = 0 compressRawdata = 1 coldPath = $SPLUNK_DB\splunklogger\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\splunklogger\thaweddb homePath = $SPLUNK_DB\splunklogger\db rtRouterThreads = syncMeta = 1 maxTotalDataSizeMB = 500 rtRouterQueueSize = [summary] bucketRebuildMemoryHint = 0 compressRawdata = 1 coldPath = $SPLUNK_DB\summarydb\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\summarydb\thaweddb tstatsHomePath = volume:_splunk_summaries\summarydb\datamodel_summary homePath = $SPLUNK_DB\summarydb\db rtRouterThreads = syncMeta = 1 maxTotalDataSizeMB = 500 rtRouterQueueSize = 
[volume:_splunk_summaries] path = $SPLUNK_DB   Modified indexes.conf: [default] serviceSubtaskTimingPeriod = 30 serviceInactiveIndexesPeriod = 60 enableRealtimeSearch = true timePeriodInSecBeforeTsidxReduction = 604800 serviceMetaPeriod = 25 defaultDatabase = main rotatePeriodInSecs = 60 rtRouterThreads = 0 enableTsidxReduction = false maxHotIdleSecs = 0 bucketRebuildMemoryHint = auto suspendHotRollByDeleteQuery = false maxHotSpanSecs = 7776000 suppressBannerList = maxBucketSizeCacheEntries = 0 hotBucketTimeRefreshInterval = 10 maxHotBuckets = 3 processTrackerServiceInterval = 1 maxDataSize = auto maxRunningProcessGroups = 8 minRawFileSyncSecs = disable enableDataIntegrityControl = false minStreamGroupQueueSize = 2000 maxMetaEntries = 1000000 throttleCheckPeriod = 15 tstatsHomePath = volume:_splunk_summaries\$_index_name\datamodel_summary tsidxReductionCheckPeriodInSec = 600 maxBloomBackfillBucketAge = 30d datatype = event syncMeta = true partialServiceMetaPeriod = 0 frozenTimePeriodInSecs = 188697600 maxGlobalDataSizeMB = 0 quarantinePastSecs = 77760000 compressRawdata = true coldToFrozenScript = coldPath.maxDataSizeMB = 0 enableOnlineBucketRepair = true repFactor = 0 rtRouterQueueSize = 10000 maxTimeUnreplicatedWithAcks = 60 assureUTF8 = false maxTimeUnreplicatedNoAcks = 300 rawChunkSizeBytes = 131072 memPoolMB = auto homePath.maxDataSizeMB = 0 warmToColdScript = maxWarmDBCount = 300 minHotIdleSecsBeforeForceRoll = auto coldToFrozenDir = maxTotalDataSizeMB = 500000 maxConcurrentOptimizes = 6 maxRunningProcessGroupsLowPriority = 1 streamingTargetTsidxSyncPeriodMsec = 5000 journalCompression = gzip quarantineFutureSecs = 2592000 splitByIndexKeys = sync = 0 serviceOnlyAsNeeded = true [_audit] bucketRebuildMemoryHint = 0 compressRawdata = 1 coldPath = $SPLUNK_DB\audit\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\audit\thaweddb tstatsHomePath = volume:_splunk_summaries\audit\datamodel_summary homePath = $SPLUNK_DB\audit\db rtRouterThreads = syncMeta = 1 maxTotalDataSizeMB = 5120 rtRouterQueueSize = [_internal] bucketRebuildMemoryHint = 0 syncMeta = 1 maxHotSpanSecs = 432000 compressRawdata = 1 coldPath = $SPLUNK_DB\_internaldb\colddb minHotIdleSecsBeforeForceRoll = 0 maxDataSize = 1000 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\_internaldb\thaweddb tstatsHomePath = volume:_splunk_summaries\_internaldb\datamodel_summary homePath = $SPLUNK_DB\_internaldb\db rtRouterThreads = enableTsidxReduction = 0 maxTotalDataSizeMB = 51200 frozenTimePeriodInSecs = 188697600 rtRouterQueueSize = archiver.enableDataArchive = 0 metric.enableFloatingPointCompression = 1 selfStorageThreads = tsidxWritingLevel = [_introspection] bucketRebuildMemoryHint = 0 syncMeta = 1 compressRawdata = 1 coldPath = $SPLUNK_DB\_introspection\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\_introspection\thaweddb homePath = $SPLUNK_DB\_introspection\db rtRouterThreads = maxDataSize = 1024 maxTotalDataSizeMB = 5120 frozenTimePeriodInSecs = 1209600 rtRouterQueueSize = [_telemetry] bucketRebuildMemoryHint = 0 syncMeta = 1 compressRawdata = 1 coldPath = $SPLUNK_DB\_telemetry\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 
enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\_telemetry\thaweddb homePath = $SPLUNK_DB\_telemetry\db rtRouterThreads = maxDataSize = 256 maxTotalDataSizeMB = 500 frozenTimePeriodInSecs = 63072000 rtRouterQueueSize = [_thefishbucket] bucketRebuildMemoryHint = 0 syncMeta = 1 compressRawdata = 1 coldPath = $SPLUNK_DB\fishbucket\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\fishbucket\thaweddb tstatsHomePath = volume:_splunk_summaries\fishbucket\datamodel_summary homePath = $SPLUNK_DB\fishbucket\db rtRouterThreads = maxDataSize = 500 maxTotalDataSizeMB = 500 frozenTimePeriodInSecs = 188697600 rtRouterQueueSize = [history] bucketRebuildMemoryHint = 0 syncMeta = 1 compressRawdata = 1 coldPath = $SPLUNK_DB\historydb\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\historydb\thaweddb tstatsHomePath = volume:_splunk_summaries\historydb\datamodel_summary homePath = $SPLUNK_DB\historydb\db rtRouterThreads = maxDataSize = 10 maxTotalDataSizeMB = 500 frozenTimePeriodInSecs = 604800 rtRouterQueueSize = [main] enableOnlineBucketRepair = 1 bucketRebuildMemoryHint = 0 syncMeta = 1 minHotIdleSecsBeforeForceRoll = 0 compressRawdata = 1 coldPath = $SPLUNK_DB\defaultdb\colddb maxHotBuckets = 10 maxDataSize = auto_high_volume maxConcurrentOptimizes = 6 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\defaultdb\thaweddb tstatsHomePath = volume:_splunk_summaries\defaultdb\datamodel_summary homePath = $SPLUNK_DB\defaultdb\db rtRouterThreads = enableTsidxReduction = 0 maxHotIdleSecs = 86400 maxTotalDataSizeMB = 10240 rtRouterQueueSize = [splunklogger] disabled = true bucketRebuildMemoryHint = 0 compressRawdata = 1 coldPath = $SPLUNK_DB\splunklogger\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\splunklogger\thaweddb homePath = $SPLUNK_DB\splunklogger\db rtRouterThreads = syncMeta = 1 maxTotalDataSizeMB = 500 rtRouterQueueSize = [_syslog] bucketRebuildMemoryHint = 0 compressRawdata = 1 coldPath = $SPLUNK_DB\_syslog\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\_syslog\thaweddb tstatsHomePath = volume:_splunk_summaries\_syslog\datamodel_summary homePath = $SPLUNK_DB\_syslog\db rtRouterThreads = syncMeta = 1 maxTotalDataSizeMB = 10240 frozenTimePeriodInSecs = 7776000 rtRouterQueueSize = [_metrics] bucketRebuildMemoryHint = 0 compressRawdata = 1 coldPath = $SPLUNK_DB\_metrics\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = $SPLUNK_DB\_metrics\thaweddb tstatsHomePath = volume:_splunk_summaries\_metrics\datamodel_summary homePath = $SPLUNK_DB\_metrics\db rtRouterThreads = syncMeta = 1 maxTotalDataSizeMB = 10240 frozenTimePeriodInSecs = 7776000 rtRouterQueueSize = [summary] bucketRebuildMemoryHint = 0 compressRawdata = 1 coldPath = $SPLUNK_DB\summarydb\colddb minHotIdleSecsBeforeForceRoll = 0 enableTsidxReduction = 0 enableOnlineBucketRepair = 1 suspendHotRollByDeleteQuery = 0 enableDataIntegrityControl = 0 thawedPath = 
$SPLUNK_DB\summarydb\thaweddb tstatsHomePath = volume:_splunk_summaries\summarydb\datamodel_summary homePath = $SPLUNK_DB\summarydb\db rtRouterThreads = syncMeta = 1 maxTotalDataSizeMB = 500 rtRouterQueueSize = [volume:_splunk_summaries] path = $SPLUNK_DB        
Typically you would make apps on the Splunk forwarders which contain inputs.conf files, e.g. $SPLUNK_HOME/etc/apps/inputapp1/local/inputs.conf.

In the inputs.conf file you would have a stanza that specifies which log file to collect:

[monitor:///var/log/mylog.log]
index = myindex
sourcetype = mysourcetype

You would also have an outputs.conf app in the forwarder, which would specify the indexers to forward the logs to.
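As a rough sketch, that outputs.conf might look like the following; the output group name and indexer hostnames are placeholders, and 9997 is just the conventional receiving port:

# outputs.conf on the forwarder (group name and hostnames are example values)
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997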
If I understand correctly, you would like your string to be converted into a .csv file and attached to an email sent by SOAR? This would likely be a multi-step procedure:

1. Use a custom function to write the CSV to a temporary file on disk.
2. Use a custom function to call phantom.vault_add() to convert the temporary file on disk into a CSV in the vault.
3. Pass the ID of the file in the vault to the email action block as the attachment input.

I personally have never gotten the SMTP app to successfully send file attachments, so I would recommend testing sending email with a file already in the vault before starting steps 1 and 2.