All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I want to monitor a file share for when a file is loaded and then zip that file to send by email to internal users. Is this possible? The way our message server is set up, it can't mail attachments larger than 5.5 MB. I appreciate the assistance, thank you.
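For the monitoring half, a minimal inputs.conf sketch, assuming a forwarder can reach the share (the path, index, and sourcetype names here are hypothetical):

[monitor://\\fileserver\share\incoming\*]
index = fileshare
sourcetype = file_drop

An alert on new events in that index can drive the notification, but the zip-and-email step is outside SPL: the stock email alert action only attaches search results, so mailing the original file (zipped to fit under 5.5 MB) would likely need a custom alert action script.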
Hi, first I would like to thank everyone in this community who has guided and helped me a lot. Now I have a problem executing the rex command below.

User agent: Mozilla/5.0 (Linux; Android 8.1.0; ASUS_X00ID) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.101 Mobile Safari/537.36

REX: \(\w+;\s+(?<os_family>\w+)\s(?<os_version>\w+[^ ]+)\s(?<device_brand_model>\w+).\s(?<browser_engine>\w+)\D(?<brow_engine_version>\w+[^ ]+)\s+\(.+\)\s+(?<browser>\w+).(?<browser_version>\w+[^ ]+)\s+(?<hardware_type>\w+)

I tested the rex in regex101.com and it matched the information correctly; I got the output as expected. However, when I try executing the same command in Splunk, I get a blank screen in the Statistics view.
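One way to narrow this down is to run the rex against a literal copy of the event using makeresults, which takes sourcetype and field extraction out of the picture; a sketch using the user agent string from the question:

| makeresults
| eval _raw="Mozilla/5.0 (Linux; Android 8.1.0; ASUS_X00ID) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.101 Mobile Safari/537.36"
| rex field=_raw "\(\w+;\s+(?<os_family>\w+)\s(?<os_version>\w+[^ ]+)\s(?<device_brand_model>\w+).\s(?<browser_engine>\w+)\D(?<brow_engine_version>\w+[^ ]+)\s+\(.+\)\s+(?<browser>\w+).(?<browser_version>\w+[^ ]+)\s+(?<hardware_type>\w+)"
| table os_family os_version device_brand_model browser_engine brow_engine_version browser browser_version hardware_type

If this extracts the fields but the real search shows nothing, the usual cause is that rex runs against _raw by default while the user agent lives in an extracted field, in which case field=<your_useragent_field> is needed.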
Hello Splunk Community, I would really appreciate any guidance. I have become a bit more familiar with Splunk, but so far nothing I have tried has worked. I basically need to know where a value is being logged between two different sources. This is the scenario:
1. If the count for the value (employeeId) is 0 in both the pdf and CSV sources, don't show it.
2. When the value (employeeId) count is > 0, I need to know whether it is logged in the pdf, the CSV, or both.
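A sketch of one way to express those two rules, assuming the two feeds are distinguishable by source containing "pdf" or "csv" and the field is extracted as employeeId (the index, source patterns, and field name are all assumptions to adapt):

index=your_index employeeId=*
| stats sum(eval(if(match(source, "(?i)pdf"), 1, 0))) AS pdf_count sum(eval(if(match(source, "(?i)csv"), 1, 0))) AS csv_count BY employeeId
| where pdf_count > 0 OR csv_count > 0
| eval logged_in = case(pdf_count > 0 AND csv_count > 0, "both", pdf_count > 0, "pdf", true(), "csv")

The where clause covers rule 1 (IDs with a count of 0 in both sources never make it into the results), and logged_in answers rule 2.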
A CSV log file has field names that mix Japanese and English characters, and so do the contents. Character encoding: SHIFT-JIS. Using Splunk's default sourcetype csv, the contents are recognized correctly, but there is a problem with the field names.

The problem: the field name in the raw data is "Windows Update 実行時間(WSUS)", but the field name Splunk recognizes is "Windows Update___________WSUS". In other words, Japanese kanji and full-width characters are not being recognized.

However, this only happens on Splunk Enterprise 8.0.x and 8.1.1. In a 7.3.x environment the same CSV file is completely fine (both field names and contents are recognized correctly).

Have we hit a bug? I would appreciate an answer. Sorry for the trouble, and thank you very much in advance.
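Not a confirmed fix for the 8.x regression, but one thing worth trying is pinning the character set explicitly instead of relying on the default csv sourcetype; a minimal props.conf sketch (the sourcetype name is hypothetical):

[csv_sjis]
INDEXED_EXTRACTIONS = csv
CHARSET = SHIFT-JIS

CHARSET controls how Splunk decodes the file before the header is parsed, which is where the full-width field names appear to be getting mangled.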
Hi everyone, We are having trouble with index cluster stability and I was given these configuration changes to make to our index cluster. However, I am troubled because this is A LOT of changes. The person who suggested them said that as your Splunk deployment grows it must be tuned (which is logical enough), but I am still troubled by the sheer number of suggested configuration changes. We have three indexers in our cluster. I wanted to throw these configurations out there as fodder and see what you guys come back with. Thanks!

In server.conf on each indexer:

[clustering]
cxn_timeout = 300
send_timeout = 300
rcv_timeout = 300
heartbeat_period = 10

[httpServer]
busyKeepAliveIdleTimeout = 180
streamInWriteTimeout = 30

[sslConfig]
useClientSSLCompression = False

In server.conf on the Cluster Master:

[clustering]
executor_workers = 16
heartbeat_timeout = 300
cxn_timeout = 300
send_timeout = 300
rcv_timeout = 300
max_peer_build_load = 5
max_fixup_time_ms = 5000
max_peers_to_download_bundle = 5

[httpServer]
busyKeepAliveIdleTimeout = 180
streamInWriteTimeout = 30

[sslConfig]
useClientSSLCompression = false

In distsearch.conf on indexers and cluster master:

[replicationSettings]
sendRcvTimeout = 120

In distsearch.conf on all search heads:

statusTimeout = 120
connectionTimeout = 120
authTokenConnectionTimeout = 120
authTokenSendTimeout = 120
authTokenReceiveTimeout = 120
#receiveTimeout = 120

[replicationSettings]
connectionTimeout = 120
sendRcvTimeout = 120

In server.conf on the search heads:

[sslConfig]
useClientSSLCompression = false

See what I mean? That's a lot of changes! So many that it makes me surprised and a little uncomfortable. If anybody has any specific experience with these settings, please let me know. Thanks! -TJ
Hello, I ingested some Azure data into Splunk via Event Hub and would like to ask if you could share some ideas/alerts for Azure content. If you have Azure/Splunk in your environment, what are you alerting on based on Azure logs? Could you share some of the alert content? Any help is much appreciated.
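To make the request concrete, one common starting point is alerting on repeated failed sign-ins; a sketch, where the index, sourcetype, and field names are assumptions that depend on which add-on does the ingestion:

index=azure sourcetype=azure:monitor:aad category=SignInLogs "properties.status.errorCode"!=0
| stats count BY "properties.userPrincipalName", "properties.ipAddress"
| where count > 10

Other typical candidates: NSG and firewall rule changes, role assignments, and unusual Key Vault access.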
Hello! The health reports do not reach my email, but notifications and alerts do. Am I missing something in the configuration? It is a SaaS implementation. Thanks for your help!
Hello, I accidentally cleaned a KV store and I don't have the source data to recreate it.  I do have backups of the /var/lib/splunk/kvstore/mongo directory. Is it possible to overwrite the contents of the now empty KV store by copying the contents of my backup into the mongo folder backend? Thanks! Andrew
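In principle, yes: with Splunk stopped, the mongo directory can be swapped back in wholesale, assuming the backup comes from this same instance and Splunk version. A sketch of the sequence:

$SPLUNK_HOME/bin/splunk stop
mv $SPLUNK_HOME/var/lib/splunk/kvstore/mongo $SPLUNK_HOME/var/lib/splunk/kvstore/mongo.bak
cp -rp /path/to/backup/mongo $SPLUNK_HOME/var/lib/splunk/kvstore/mongo
$SPLUNK_HOME/bin/splunk start

Note this restores every KV store collection, not just the one that was cleaned, so it only fits if nothing else has changed since the backup was taken.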
We have several apps in Splunkbase that were first published before 2017. As explained in this answer, the default app icon background was changed from white to transparent in 2017. Since our apps are older, they still have the white background, which does not look so great with rounded corners. How can we get the background changed from white to transparent? Notes: in the icon's PNG the corners have always been transparent, and we don't specify a color in default.xml.
Hi, I always appreciate your taking the time to answer my questions. We will connect independent systems using an L3 switch and send the syslog to the cyber security operation center (CSOC), as shown in the attached picture. (The network switch will send via its syslog function, and a Splunk forwarder will be installed on each workstation.)

Since the application installed on the workstation cannot be modified, the server's IP address cannot be changed. For example, if an application communicates with a server at the address 192.168.0.10, it is impossible to change the server IP address because the application code cannot be modified.

Splunk Enterprise SIEM will be installed in the CSOC, and a Heavy Forwarder will be used as the agent for syslog transmission. If IP addresses are duplicated between independent systems, is there a problem in transmitting logs? Also, is there a function to transfer syslog while changing the source IP address to distinguish assets? Or is there another way to differentiate between assets?

Best regards,
So I am in somewhat of a fun situation where we have multiple instances of Splunk installed, each with its own index cluster and search head cluster. I know you can configure search heads to search multiple index clusters, but not all my index clusters have the "same" data in a like-named index (mainly index=main). So what I was wondering is: if I install all the apps from all of the instances onto the search head cluster that is configured to connect to all index clusters, can I tell those apps to only look at the appropriate index cluster that has the data they want? I think I could maybe accomplish this with sites, if I could tie an app to a site, but I am not finding either index cluster or site configurations for individual apps. The point would be to provide a single place to log in and see all the Splunk data, and to eventually retire the now extraneous search head clusters, without the apps having to search multiple "main" indexes in clusters that don't have the data they are looking for.
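One mechanism that might fit, short of per-app cluster bindings (which, as noted, don't seem to exist), is distributed search groups: define one group per index cluster's peers on the consolidated search head, then constrain searches to a group. A sketch with hypothetical names in distsearch.conf:

[distributedSearch:clusterA]
servers = idxA1:8089,idxA2:8089

[distributedSearch:clusterB]
servers = idxB1:8089,idxB2:8089

and in a search:

splunk_server_group=clusterA index=main ...

The catch is that the apps' searches would need to carry the splunk_server_group restriction, e.g. via an edited base macro or eventtype, rather than it being set once per app.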
I am very new to Splunk. I have a CSV which I want to show as a table in Splunk, and I did that using the table command. Now I want a dropdown based on the IDs column, so that when someone selects an ID value from the dropdown, the table only shows the rows for the selected ID. Secondly, I want to change the color of the GAC_percent column cells based on their value; for example, if GAC_percent > 90.00%, the cell color should be green. Any help is much appreciated. Thanks!

IDs | Drop | Features | GAC_percent | GAC
A | 2004 | 11 P, 1 B for 2004 Trend | 97.51% | g
A | 2003 | 11 P, 1 B for 2003 Trend | 88.00% | y
B | 2003 | 12 P, 10 B for 2003 Trend | 89.00% | y
B | 2004 | 3 P, 2 B for 20Q4 Trend | 97.51% | g
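A Simple XML sketch covering both pieces, assuming the CSV is available as a lookup (the lookup file and token names are hypothetical, and if the palette expression rejects the % sign, stripping it with an eval in the search is the safer route):

<form>
  <fieldset>
    <input type="dropdown" token="id_tok">
      <label>ID</label>
      <search>
        <query>| inputlookup gac_data.csv | stats count BY IDs</query>
      </search>
      <fieldForLabel>IDs</fieldForLabel>
      <fieldForValue>IDs</fieldForValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| inputlookup gac_data.csv | where IDs="$id_tok$" | table IDs Drop Features GAC_percent GAC</query>
        </search>
        <format type="color" field="GAC_percent">
          <colorPalette type="expression">if(tonumber(replace(value, "%", "")) > 90.0, "#53A051", "#F8BE34")</colorPalette>
        </format>
      </table>
    </panel>
  </row>
</form>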
Hi,   I have 2 indexers with different hardware specifications. Is it possible to form a cluster between these 2 nodes? I don’t see anything about it in the documentation https://docs.splunk.com/Documentation/Splunk/8.1.1/Indexer/Systemrequirements   Thanks in advance, Best regards
Windows security logs are sent to a heavy forwarder, which is configured to send these logs to a syslog server in addition to sending to the indexers. (We have additional tools that require the Windows security logs.) So, right now the config is almost exactly by the book, following these docs: https://docs.splunk.com/Documentation/SplunkCloud/8.1.2011/Forwarding/Forwarddatatothird-partysystemsd

What we're trying to do is move these logs into a different tool and slowly remove them from Splunk indexes, going by region as each domain controller has a specific naming scheme, but retaining the forwarding to syslog. The problem I'm running into is that adding a null route also stops logs from going to syslog. I think I understand why it's happening: the hostname matches in both stanzas, so the null route overwrites the syslog route. I've tried changing the order, making sure the syslog route is last, but that doesn't change anything. I've been looking and I can't figure out a way to avoid this. The only unique thing I can match on is hostname. I also don't see that transforms or props has a logical NOT, i.e., for these do this, for those do not.

props.conf:

[source::WinEventLog:Security]
TRUNCATE = 0
TRANSFORMS-routing = routeAll, routeSubset, routeSubset2, routeNull

transforms.conf:

[routeAll]
REGEX = (.)
DEST_KEY = _TCP_ROUTING
FORMAT = splunkssl

[routeSubset]
SOURCE_KEY = MetaData:Host
REGEX = (?i)(.*dc[0-9][0-9].*)
DEST_KEY = _TCP_ROUTING
FORMAT = splunkssl

[routeSubset2]
SOURCE_KEY = MetaData:Host
REGEX = (?i)(.*dc[0-9][0-9].*)
DEST_KEY = _SYSLOG_ROUTING
FORMAT = my_syslog_server

[routeNull]
SOURCE_KEY = MetaData:Host
REGEX = (?i)(.*region1dc[0-9][0-9].*|.*region2dc[0-9][0-9].*)
DEST_KEY = queue
FORMAT = nullQueue
I am indexing data into Splunk. Once data in the cold bucket reaches one month of age, it has to move to the frozen bucket, every month. The frozen bucket lives on a NAS file system. Could someone help me move data from the cold bucket path to the NAS path? Let me know what will help in this scenario (a sketch follows below):
a. Do I need to put a script in place?
b. Can we specify it directly in the index stanza so it will move?
Thanks in advance.
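If a straight copy to the NAS is enough, option (b) works on its own: indexes.conf has a coldToFrozenDir setting, and frozenTimePeriodInSecs controls when buckets roll to frozen. A sketch, with the index name and NAS mount point as placeholders:

[your_index]
frozenTimePeriodInSecs = 2592000
coldToFrozenDir = /mnt/nas/splunk_frozen/your_index

2592000 seconds is 30 days. A coldToFrozenScript (option a) is only needed for handling beyond a plain copy. Keep in mind archived frozen buckets retain only the rawdata journal, so bringing them back later means thawing them.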
Hi all, We are trying to calculate SLA from Jira logs in our Splunk. What we want to achieve is to calculate the time between Team field changes for a specific ticket. Our current log results and the expected output are below.

Current:

Time | Team | Ticket No
09/12/2020 08:22 | Level 3 | Ticket 1
08/12/2020 06:08 | Level 2 | Ticket 1
08/12/2020 04:08 | Level 1 | Ticket 1
09/12/2020 16:22 | Level 3 | Ticket 2
08/12/2020 12:08 | Level 2 | Ticket 2
08/12/2020 10:08 | Level 1 | Ticket 2

Expected:

Ticket No | Transition | Time
Ticket 1 | Level 1 to Level 2 | 2 hours
Ticket 1 | Level 2 to Level 3 | 2 hours, 14 mins
Ticket 2 | Level 1 to Level 2 | 3 hours
Ticket 2 | Level 2 to Level 3 | 2 hours, 20 mins

I hope I explained it clearly. Any help is really appreciated, thank you!
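A sketch using streamstats to pair each Team change with the previous one per ticket; the index, field names (Time, Team, Ticket_No), and timestamp format are assumptions to adapt:

index=jira
| eval t = strptime(Time, "%d/%m/%Y %H:%M")
| sort 0 Ticket_No t
| streamstats current=f window=1 last(Team) AS prev_team last(t) AS prev_t BY Ticket_No
| where isnotnull(prev_team)
| eval Transition = prev_team . " to " . Team
| eval Duration = tostring(t - prev_t, "duration")
| table Ticket_No Transition Duration

tostring(..., "duration") renders the gap as HH:MM:SS, which can be reformatted if the "2 hours, 14 mins" wording is required.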
I have a file full of logs from different sources, but I want to monitor only the logs from a particular network device (Cisco ISE). Please help me do it using props: in the example below, wherever <ise-hostname> appears, those lines have to be kept (meaning the ISE logs should be extracted and everything else dropped before going to the indexer).

Oct 6 03:44:01 <hostname> rsyslogd: [origin software="rsyslogd" swVersion="5.8.10" x-pid="1294" x-info="http://www.rsyslog.com"] rsyslogd was HUPed
Oct 6 03:44:02 <hostname> rhsmd: This system is registered to RHN Classic.
Oct 6 03:44:06 <ise-hostname> <hostname>: Dropping Primary discovery request from AP - limit for maximum APs supported 30 reached
Oct 6 03:40:16 <ise-hostname> CISE_Failed_Attempts 1 0 2019-10-06 03:40:16.968 +05:30 NOTICE Failed-Attempt: RADIUS Accounting-Request dropped, ConfigVersionId=62, Device IP Address=<ip-address>, Device Port=<PORT>, DestinationIPAddress=<ip-address>, DestinationPort=<PORT>, Protocol=Radius, User-Name=ppp, Acct-Status-Type=Start, Acct-Session Id=sfaksdaksf, Event-Timestamp=1569504083, AcsSessionID=<hostname>/asdasd, FailureReason=11007 Could not locate Network Device , Step=333, Step=55, Step=22, Step=11, #44
Oct 6 03:44:09 <hostname>: MOBILESTATION_NOT_FOUND: Could not find the mobile sadas in internal database
Oct 6 03:40:26 <ise-hostname> CISE_Failed_Attempts 1 0 2019-10-06 03:40:26.180 +05:30 NOTICE Failed-Attempt: RADIUS Accounting-Request dropped, ConfigVersionId=62, Device IP Address=<ip-address>, Device Port=<port>, DestinationIPAddress=<ip-address>, DestinationPort=<port>, Protocol=Radius, User-Name=wipro, Acct-Status-Type=Start, Acct-Session-Id=sdfsdfs, Event-Timestamp=1569504083, AcsSessionID=dfsdf, FailureReason=33 Could not locate Network Device , Step=343, Step=231, Step=55, Step=11, #44
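The documented pattern for "keep only these, drop the rest" is two queue transforms where the keep rule comes last; a sketch, with the sourcetype name and the match pattern as assumptions (match on your real ISE hostname rather than the <ise-hostname> placeholder):

props.conf:

[your_syslog_sourcetype]
TRANSFORMS-filter = setnull, setparsing

transforms.conf:

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = ise-hostname
DEST_KEY = queue
FORMAT = indexQueue

setnull routes everything to the null queue, then setparsing overrides that back to the index queue for events matching the ISE host. Order in the TRANSFORMS- list matters (the last matching transform wins), and this must live on the first heavy forwarder or indexer that parses the data.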
Hello Splunkers, I programmed a saved search with a send-webhook-data action to send the result in JSON format. I noticed that the data sent contains additional information like the app name and results_link:

INFO -: {"app" => "search", "results_link" => "http://splk-sh:8000/app/search/search?...

In fact, I don't want this information in my results. I searched in the advanced actions and found:

action.webhook.command: sendalert $action_name$ results_file="$results.file$" results_link="$results.url$"

I tried to delete results_link but it doesn't work. Did you encounter this problem with the webhook action? It can be the same with the email action. Thank you
I seem to have tied myself in a knot. I have data similar to:

h1  h2   h3   h4
a   12   123  231
a   32   45   678
b   43   56   78

What I want is a chart of the totals for h2, h3 and h4. It's probably stunningly easy, but for the life of me I can't get it. Thanks.
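A sketch of one way, assuming h2, h3 and h4 are already extracted as numeric fields:

... your base search ...
| stats sum(h2) AS h2 sum(h3) AS h3 sum(h4) AS h4
| transpose column_name=field

transpose turns the single result row into field/value pairs so a bar chart draws one bar per column; drop it if one grouped row charts fine as-is.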
Hello, I have the following problem with the anonymization of a source. The source of the data is:

\\summer.de\group\Anwendungen\Splunk\starbucks\*

There are logs such as 123456.log, 342618.log, etc. Example:

\\summer.de\group\Anwendungen\Splunk\starbucks\123456.log

inputs.conf (UF):

[monitor://\\summer.de\group\Anwendungen\Splunk\starbucks\*]
sourcetype = log_starbucks_anonymized
index = starbucks

indexes.conf (IDX):

[starbucks]
homePath = $SPLUNK_DB/starbucks/db
coldPath = $SPLUNK_DB/starbucks/colddb
thawedPath = $SPLUNK_DB/starbucks/thaweddb

props.conf (IDX):

[log_starbucks_anonymized]
MAX_EVENTS = 2000
MAX_TIMESTAMP_LOOKAHEAD = 50
NO_BINARY_CHECK = 1
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
TRUNCATE = 50000
pulldown_type = 1
BREAK_ONLY_BEFORE = .+.{2}:.{2}:.{2},.{3}
TRANSFORMS-anonymize = path_anonymizer_starbucks

transforms.conf (IDX):

[path_anonymizer_starbucks]
DEST_KEY = MetaData:Source
FORMAT = $1XXXXXX$2
REGEX = (\\\\\w+\.\w+\\\w+\\\w+\\\w+\\\w+\\)\d{1,6}(\.\w+)
SOURCE_KEY = MetaData:Source

Target: the source in Splunk currently looks like this:

\\summer.de\group\Anwendungen\Splunk\starbucks\123456.log

But it should look like this:

\\summer.de\group\Anwendungen\Splunk\starbucks\XXXXXX.log

Question: What have I overlooked? The apps and the stanzas are in the right places, and I can't find any "wrong" entries with btool. After changing the stanza on the cluster master, the change is applied using "apply cluster-bundle" and is also displayed on the indexers in the cluster. I just can't find the error. I have already tried various regexes, but unfortunately that does not bring about any change. Thank you for your help.
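One thing that stands out: per the documented pattern for overriding metadata keys at parse time, a rewrite of MetaData:Source needs the source:: prefix in FORMAT. A sketch of the adjusted stanza, everything else kept as in the question:

[path_anonymizer_starbucks]
SOURCE_KEY = MetaData:Source
REGEX = (\\\\\w+\.\w+\\\w+\\\w+\\\w+\\\w+\\)\d{1,6}(\.\w+)
DEST_KEY = MetaData:Source
FORMAT = source::$1XXXXXX$2

Also worth remembering that index-time transforms only affect events parsed after the change; events already indexed keep their old source value.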