All Topics

I am learning Splunk and playing with different log types. So far I have exported CSV files and played around with them. I have some IIS logs (in txt format) that I want to work with. Is there a way to import these into Splunk Enterprise? I want to import the text files I have, play with the logs, create some searches, and do more. Any help is appreciated. Thank you.
When the indexes were created, they were created with default settings. Now that I need to know how long a log takes to go from a hot to a warm bucket and finally to frozen, I have the following questions:

1. Is this query correct for finding how long an index is configured to keep data before it goes to frozen? | rest /services/data/indexes | fields title froz* | rename title AS index
2. The default value I see is 6 years, but I need it to store the logs for only 6 months, understanding that when a log reaches 6 months it would go to a frozen state and Splunk would begin to eliminate the older data.
3. Should I create a file called indexes.conf in the "local" folder and set the value frozenTimePeriodInSecs = 15778800? Should I then go to the bin directory and restart the Splunk service for it to pick up the changes?
4. Would this change immediately erase logs that are already older than 6 months, or does it only apply from this moment on?
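As a side note, the 15778800-second value in the question can be sanity-checked: it is half of a 365.25-day Julian year, i.e. roughly six months. A quick sketch of the arithmetic (in Python, purely for illustration):

```python
# Sanity-check the retention value: 15778800 seconds is half of a
# 365.25-day Julian year, i.e. roughly six months.
SECONDS_PER_DAY = 24 * 60 * 60            # 86400
julian_year = 365.25 * SECONDS_PER_DAY    # 31557600.0
six_months = int(julian_year / 2)
print(six_months)  # 15778800
```

Setting frozenTimePeriodInSecs to this value in indexes.conf would therefore mean roughly six months of retention before data is frozen.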
Hello, we have one universal forwarder and two cloud instances. Currently I have all data going to one indexer, and I've been attempting to determine the most efficient way to parse and route the data so that it goes to the correct indexer/Splunk instance without affecting ingest volume. If I ingest everything into the "primary" indexer, can I just parse and route that raw data to the "secondary" indexer without affecting ingest volume on the "primary"? I was originally going to use a heavy forwarder for the parsing and routing, but it sounds like that may be more network-IO intensive.
Hello, is there any possibility to poll SNMP data in a cloud environment? Can I use a UF to make queries for a particular OID? Do I need to run the SNMP Modular Input add-on on the UF? Thanks in advance, Szymon
We're using an AppDynamics dashboard to view several custom metrics that are published from our Java-based apps every 30 seconds, but AppDynamics aggregates all the metrics published within a minute and shows the sum as a single data point on the graph. Instead, we would like to see 2 different data points plotted on the graph (30 seconds apart) with their individual observed values. Is this possible? If yes, how?
Going through the information on the add-on, it states: "The Microsoft API used by the add-on only supports Basic Authentication (username and password)." With Microsoft announcing it will be deprecating basic auth later this year (Exchange Online deprecating Basic Authentication - Microsoft Lifecycle | Microsoft Docs), is anyone aware of any updates pending for the API, or for the add-on to utilise the Microsoft Graph API instead?
Dear community, I have to implement Oracle 12c auditing and save/export the audit data to a shared drive on the SYSLOG server. Splunk will get the data from this SYSLOG server. In a configuration like that, what is the best approach: mixed auditing or pure auditing? My approach is to define pure auditing and create 2 procedures: transfer UNIFIED_AUDIT_TRAIL to the SYSLOG server, and clean up the auditing data from time to time. With DB Connect, I must create a specific user, which I don't want. Is there a step-by-step guide for implementing Oracle 12c auditing on Windows? Thanks
Hi Experts, please help with a regex to parse the hh:mm:ss into a separate field as shown below.

message:
hello this is the first message from splunk 12:45:13
hai this is the second message from splunk
hello this is the third message from splunk 19:43:53

expected output:

subject: hello this is the first message from splunk          time: 12:45:13
subject: hai this is the second message from splunk           time:
subject: hello this is the third message from splunk          time: 19:43:53

I tried | rex field=message "(?<subject>.\w+)\s*(?<time>.\d+:\d+:\d+)?" but it did not work. Please help. Thanks in advance.
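One possible fix, sketched in Python so the pattern can be tested outside Splunk (the assumption being that the timestamp, when present, sits at the end of the message): make the subject capture lazy and anchor the optional hh:mm:ss group at the end of the line.

```python
import re

# Lazy subject capture plus an optional end-anchored hh:mm:ss group.
pattern = re.compile(r"^(?P<subject>.*?)\s*(?P<time>\d{1,2}:\d{2}:\d{2})?\s*$")

messages = [
    "hello this is the first message from splunk 12:45:13",
    "hai this is the second message from splunk",
]
results = [pattern.match(m).groupdict() for m in messages]
print(results)
```

The equivalent rex would be along the lines of | rex field=message "^(?<subject>.*?)\s*(?<time>\d{1,2}:\d{2}:\d{2})?\s*$" (a starting point to verify against your data, not a tested SPL pipeline).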
I have a field called "Completed_On" in time format: 12/23/2020 14:16:51. I'd like to remove the hours, minutes, and seconds so it just displays 12/23/2020.  How can I do this? 
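One common approach is to re-parse the string and emit only the date portion. A minimal Python sketch of the conversion, assuming the field is a string in exactly the format shown:

```python
from datetime import datetime

# Parse the full timestamp, then format only the date part.
completed_on = "12/23/2020 14:16:51"
date_only = datetime.strptime(completed_on, "%m/%d/%Y %H:%M:%S").strftime("%m/%d/%Y")
print(date_only)  # 12/23/2020
```

In SPL this would look roughly like | eval Completed_On=strftime(strptime(Completed_On, "%m/%d/%Y %H:%M:%S"), "%m/%d/%Y") (a sketch; verify the format string against your data).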
I need help finding a query that can list every sourcetype and index of each and every app present on a search head or an instance. Is it possible to get these results using SPL?
Hi, when I drop traffic events on a Heavy Forwarder (fgt_traffic), my stanza doesn't work. It's strange, because another heavy forwarder has the same configuration and it works there. My props.conf and transforms.conf are:

props.conf

# Filter for Fortinet traffic events
[fgt_log]
TRANSFORMS-filtro = filtrado_fortinet_traffic

transforms.conf

[filtrado_fortinet_traffic]
SOURCE_KEY = _raw
REGEX = \stype\=\"traffic\"\s
DEST_KEY = queue
FORMAT = nullQueue

Best regards, Diego.
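One thing worth checking is the REGEX itself: it requires whitespace on both sides of type="traffic", so a matching pair at the very end of an event will not be caught. A quick Python sketch of that behaviour, using hypothetical Fortinet-style sample lines (the samples are assumptions, not real data from the post):

```python
import re

# The transform's REGEX, unchanged: note it demands whitespace on BOTH
# sides of the key=value pair.
regex = re.compile(r'\stype\="traffic"\s')

# Hypothetical sample events, for illustration only.
dropped     = 'date=2021-01-05 devname="fw1" type="traffic" subtype="forward"'
kept        = 'date=2021-01-05 devname="fw1" type="event" subtype="system"'
end_of_line = 'date=2021-01-05 devname="fw1" type="traffic"'

print(bool(regex.search(dropped)))      # True  -> routed to nullQueue
print(bool(regex.search(kept)))         # False -> indexed normally
print(bool(regex.search(end_of_line)))  # False -> NOT dropped: no trailing space
```

If the events on the failing forwarder end with type="traffic" (or use different quoting), a more tolerant pattern such as \stype="traffic"(\s|$) might behave more consistently; this is a guess, since the actual raw events are not shown.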
Hello, I have an issue with Symantec Bluecoat ProxySG when I index data on a heavy forwarder. The logs don't parse correctly and the coverage is less than 5% of the total events; the sourcetype defined is bluecoat:proxysg:access:syslog. What is the correct format of the log? An example of the log received is attached. Thanks in advance.
I have an index cloud_stats on which I need to create a daily error count by source report, so that we can work on the errors with the highest counts first. I have 4 error keywords to search for:

1. DailyErrorLogtTrigger
2. Check if cloud return error or not
3. Tracking Error
4. Maualerrormessage

For 1 and 2 I created the search below, which works perfectly:

index="cloud_stats" "*ERROR*"
| search ("*DailyErrorLogtTrigger*" OR "*check if Cloud return error or not*")
| rex field=_raw "INFO(.+(?=Step2)|[\s>]+)(?<Error>.+)"
| table index source Error
| stats count(Error) AS COUNT BY index source Error
| sort -COUNT

And for 3 and 4 I created the search below, which also works perfectly:

index="cloud_stats" "*ERROR*"
| search ("*Tracking Error*" OR "*Maualerrormessage*")
| rex field=_raw "(?<Error>DisplayName[^,]+Tracking[^,]+)"
| rex field=_raw "ManualStatsInfo.*,(?<Error>.*)"
| table index source Error
| stats count(Error) AS COUNT BY index source Error
| sort -COUNT

The issue arises when I combine both of the above searches into one, like below:

index="cloud_stats" "*ERROR*"
| search ("*Tracking Error*" OR "*Maualerrormessage*" OR "*DailyErrorLogtTrigger*" OR "*check if Cloud return error or not*")
| rex field=_raw "(?<Error>DisplayName[^,]+Tracking[^,]+)"
| rex field=_raw "ManualStatsInfo.*,(?<Error>.*)"
| rex field=_raw "INFO(.+(?=Step2)|[\s>]+)(?<Error>.+)"
| table index source Error
| stats count(Error) AS COUNT BY index source Error
| sort -COUNT

What happens after combining the two searches:
Warning part: I get an error in the rex command, pointing to the rex from the 1st search.
Frustrating part: for some reason, the second search (on Tracking Error) starts to output the complete event (in the red box in the output) instead of the filtered-out keywords (in the green box in the output) for some cases. Those events are a page long, and I don't want to create a report over hundreds of events where each event is a page long. That would make the error trend analysis more cumbersome.

Can anyone please help on how to get only the words that are filtered by the regex (as in the green box) instead of the complete event (as in the red box)?
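Since three separate rex commands each write the Error field, a later, broader pattern can overwrite an earlier, more specific match. One way to reason about the desired behaviour is "first pattern wins": try the patterns in priority order and stop at the first match. A Python sketch of that logic, using the three regexes from the searches above (the sample strings are invented for illustration):

```python
import re

# The three extraction patterns from the searches, in priority order.
patterns = [
    re.compile(r"(?P<Error>DisplayName[^,]+Tracking[^,]+)"),
    re.compile(r"ManualStatsInfo.*,(?P<Error>.*)"),
    re.compile(r"INFO(?:.+(?=Step2)|[\s>]+)(?P<Error>.+)"),
]

def extract_error(raw):
    """Return the Error text from the first pattern that matches, else None."""
    for p in patterns:
        m = p.search(raw)
        if m:
            return m.group("Error")
    return None

# Invented sample events, one per pattern family.
print(extract_error("DisplayName=JobA Tracking failed, details"))
print(extract_error("2021-01-05 INFO >> DailyErrorLogtTrigger failed"))
```

In SPL, a similar effect can be had by extracting into three different field names (Error1, Error2, Error3) and then | eval Error=coalesce(Error1, Error2, Error3), so a broad pattern cannot clobber a more specific one (a sketch to adapt, not a tested pipeline).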
Hello. Currently, we are looking to resolve an issue using the TA for Microsoft Cloud Services. The TA has been installed on our Heavy Forwarder and is used to ingest IIS log files from a Storage Blob. Below are the error messages we receive:

ERROR TcpInputProc - Message rejected. Received unexpected message of size=369295616 bytes from src=xxx.xxx.xxx.xxx:65138 in streaming mode. Meximum message size allowed=67108864. Possible invalid source sending data to splunktcp port or valid source sending unsupported payload.

ERROR ApplicationUpdater - Error checking for update, URL=https://apps.splunk.com/api/apps:resolve/checkforupgrade: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed - please check the output of the 'openssl verify' command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name.

ERROR X509Verify - X509 certificate (CN=GlobalSign,O=GlobalSign Root CA - R3) failed validation; error=19, reason="self signed certificate in certificate chain"

We have checked the SSL configurations:

server.conf

[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/cacert.pem

inputs.conf

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = password
requireClientCert = false

And they look fine. Has anyone else faced something similar? If so, what did you do to resolve this issue? Thanks. Regards, Max
Hi, I have a CloudTrail data source feeding into the AWS Add-on on a single-instance Splunk deployment. If I go to the AWS app and do a search from within that app, Splunk is able to extract all the interesting fields and populate them into key-value pairs just fine. However, I've built a dashboard using that data source and interesting fields in the Search & Reporting (S&R) app, and Splunk does not populate those same key-value pairs as it would in the AWS app. The only way to extract those key-value pairs from within the S&R app is to do a spath search, which is not the best way to build the searches in the dashboard. I've already checked the field settings, and all the AWS fields are enabled globally in the permissions section. Has anybody experienced this issue before, or have any ideas where to poke to get the fields extracted globally?
Hi, is there a way to extract a value from a field even when there is no = between the key and the value? After extracting, I want to use them as search criteria. Unfortunately, I need to work with data that is not optimized for Splunk. For example, I have the following raw field:

"2020-12-16 13:39:00.7174 INFO 001d1764-80c3-4c35-87c7-ec25382b4328 IM_Contact with SetID Cardlink_DCDOB2012146196-1006 has current Status Completed. ContactID [CO-000085513778], CaseID [CA-000002980184] APOrchestrator.ProcessIncomingMessage => ServiceQueueOrchestrator`2.LogContactStatus => Logger.LogInfo"

I want to extract the following key/values:

Info = 001d1764-80c3-4c35-87c7-ec25382b4328
SetID = Cardlink_DCDOB2012146196-1006
Status = Completed
ContactID = CO-000085513778
CaseID = CA-000002980184

I found some interesting answers, but all of them work with real key-value pairs (fields) as a basis.
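Without a = separator, the usual trick is position-based capture groups anchored on the label words that precede each value. A Python sketch built against the single sample line above (so it assumes the field order and bracketing shown there; real events may vary):

```python
import re

raw = ('2020-12-16 13:39:00.7174 INFO 001d1764-80c3-4c35-87c7-ec25382b4328 '
       'IM_Contact with SetID Cardlink_DCDOB2012146196-1006 has current Status '
       'Completed. ContactID [CO-000085513778], CaseID [CA-000002980184] '
       'APOrchestrator.ProcessIncomingMessage => '
       'ServiceQueueOrchestrator`2.LogContactStatus => Logger.LogInfo')

# One capture per value, anchored on the literal label that precedes it.
fields = {
    "Info":      re.search(r"INFO\s+(\S+)", raw).group(1),
    "SetID":     re.search(r"SetID\s+(\S+)", raw).group(1),
    "Status":    re.search(r"Status\s+(\w+)", raw).group(1),
    "ContactID": re.search(r"ContactID\s+\[([^\]]+)\]", raw).group(1),
    "CaseID":    re.search(r"CaseID\s+\[([^\]]+)\]", raw).group(1),
}
print(fields)
```

The same idea carries over to rex with named groups, e.g. | rex "SetID\s+(?<SetID>\S+)" chained once per label (a sketch to adapt, not a tested SPL pipeline).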
I have a Spring Boot application using an HTTP Event Collector to send logs to Splunk via a Log4j2 appender: https://github.com/splunk/splunk-library-javalogging/blob/master/src/main/java/com/splunk/logging/HttpEventCollectorLog4jAppender.java This appender is causing my application to hang and not terminate after the application has finished.
I'm a novice user of Splunk and need a simple index search for account creation, time, and creator. I'm on a closed domain and don't have the typical add-ons. Thank you in advance.
I've noticed that it's possible to run a playbook in the scope of one single artifact using the Playbook Debugger. Is there any way to make playbooks behave like this in normal usage as well? To better describe what I mean, let me use a simplified example: let's say that I have a playbook that gathers the names of the artifacts via a 'format' block. Later on, text from this format block is used as a query in another block. So if there are 3 artifacts in this container, the output from the format block looks like this: "name1, name2, name3". What I want to do is get the first name, pass it to the next blocks until the whole process is finished, then come back to the start of the playbook, get the second name, and so on. My playbook works properly for single artifacts, but it fails while running on whole events, as my queries do not work with multiple arguments being passed. Is there any workaround for this? *UPDATE* According to Customize the format of your Splunk Phantom playbook content - Splunk Documentation, I tried the option of including '%%' in the format block, but it did not change much. The next block still uses the whole format block output at once.
Hello, I have two Domain Controllers that are producing a lot of data, pushing my daily usage over the limit. I saw that the sourcetypes WinNetMon and Perfmon:Process have the most duplicate logs in them. How do I reduce the duplication before indexing occurs, saving me on my daily usage? Thank you