All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi Splunk Team! I want to use the website monitor app to monitor a URL over SSL. How do I configure the app? Thanks, all.
I am trying to break events in the data below after each pipe (|). I placed the props.conf on both the UF and the HF, but it still doesn't apply; however, when I try the same props.conf in the UI (Add Data) before indexing, it works.

HOSTNAME=**,PROGRAM=MANAGER,FILENAME=TEST,STATUS=UP|HOSTNAME=,PROGRAM=EXTRACT,FILENAME=TEST,STATUS=UP|HOSTNAME=,PROGRAM=EXTRACT,FILENAME=TEST,STATUS=UP|HOSTNAME=,PROGRAM=EXTRACT,FILENAME=TEST,STATUS=UP|HOSTNAME=*,PROGRAM=EXTRACT,FILENAME=TEST,STATUS=UP|HOSTNAME=***,PROGRAM=EXTRACT,FILENAME=TEST,STATUS=UP

I tried the two props.conf stanzas below.

[sourcetype]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = (|)

[sourcetype]
BREAK_ONLY_BEFORE = (|)

I am using Splunk version 7.0.
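A hedged sketch of a corrected configuration, assuming the stanza name my_sourcetype is a placeholder for the real sourcetype: the pipe is a regex metacharacter, so (|) only matches an empty string and breaks nothing, and for data that arrives without newlines the split on a heavy forwarder is driven by LINE_BREAKER rather than BREAK_ONLY_BEFORE.

On the HF (or indexer), in props.conf:
[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = (\|)

On the UF (6.5 or later), for event-boundary-aware load balancing only:
[my_sourcetype]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = (\|)

Note that LINE_BREAKER discards whatever the first capture group matches, so breaking on (\|) drops the pipe character itself from the indexed events.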
I am looking for Splunk ITSI videos and documents on how to configure it with Splunk Enterprise.
Issue: We have a SmartStore deployment and are seeing continued steady growth in S3 space. It appears the data in SmartStore is not honoring the "frozenTimePeriodInSecs" we have specified on a per-index basis. I have been informed others are seeing this issue as well, and I have been asked to open a case.
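A hedged sketch of the per-index settings that typically govern SmartStore retention, assuming the index name my_s2_index and volume name remote_store are placeholders: for SmartStore-enabled indexes, freezing is driven by frozenTimePeriodInSecs and maxGlobalDataSizeMB (maxTotalDataSizeMB is not used), and the settings need to reach every indexer, for example via the cluster manager bundle.

[my_s2_index]
remotePath = volume:remote_store/$_index_name
frozenTimePeriodInSecs = 7776000
maxGlobalDataSizeMB = 500000

This is only a sketch for cross-checking the configuration, not an explanation of the continued S3 growth.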
I am ingesting data that mixes Japanese (UTF-8) with numbers and date information. The ingest Char-set is set to AUTO. Looking at the ingested results, the Japanese displays fine, but as soon as I run any search, the display is converted to a different character encoding and becomes unreadable (it turns into code values like \xB7). I deleted the index and re-ingested the data several times, but the result was the same. I confirmed that the problem no longer occurs if I re-save the input file as SJIS, but since this data is downloaded periodically from the cloud, converting the character encoding every time is not operationally realistic. Is there any workaround? (I also tried explicitly setting CHAR-SET to UTF-8, but the result did not change.)
Hi, I used "Add Data: Files and Directories" function to add a 200MB csv file from my hard drive into Splunk Enterprise 8.0.2 (Trial Version, MacOS). In order to do that, I configured it with a custo... See more...
Hi, I used "Add Data: Files and Directories" function to add a 200MB csv file from my hard drive into Splunk Enterprise 8.0.2 (Trial Version, MacOS). In order to do that, I configured it with a custom sourcetype and a custom index called "bigdbook". As the result, the file was uploaded successfully. I checked again by looking in the Settings > Data: Indexes section and saw an increment at the value of the current size column, the value was 124MB. But after I shut down Splunk and started it back on, then I did several searches and found out that the data that I inputted was completely gone as the current size column's value in the indexes section returned to the default value 1MB. So far, I have tried reinstalling Splunk and switching to "Add Data: Monitor" but the problem remains the same. Please help me regarding this.
I am trying to update the ITSI action rule message with a different message body. Even after saving it, the message is not kept and reverts to the original. The steps taken are: ITSI -> Configure -> Notable Event Aggregation Policies -> Service Level Event Grouping -> Action Rule -> click on "If a specific event occurs, then send an email for the episode" -> Configure -> edit the message column -> OK -> Save. Even after this, when I click back into the event, it shows the same values in the messages.
When I run my Splunk query, I get a url field whose value looks like this: https://location-server-aks-611ab294.test.australia.azm.io:443/api I would like to extract the words "location" and "server" from that value. How can I accomplish this?
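A hedged sketch using rex, assuming the hostname always follows the same name-role-hash pattern as the example (the field names svc_name and svc_role are made up for illustration):

... | rex field=url "https?://(?<svc_name>[^-]+)-(?<svc_role>[^-]+)-"

For the value above this would yield svc_name=location and svc_role=server.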
I have a Correlation Search that ceased generating notable events without any change or adjustment to the search itself. It fired a few weeks ago but hasn't fired since. The SPL runs when executed manually from the search app and will send emails as a saved search, but it simply refuses to generate notable events. I reviewed the internal logs, and they show that the search is running successfully but always with zero results, despite the fact that events are returned when it is run manually over the same time frame. The search has some regex, a few macros, lookups, and stats, nothing out of the norm for my other searches that fire reliably. It runs twice an hour on a cron schedule, again nothing out of the norm. Has anyone ever encountered and eventually overcome this type of issue?
I was just curious about this since I couldn't find anything on it on the following page: https://docs.splunk.com/Documentation/Splunk/8.0.1/RESTREF/RESTcluster Thanks in advance.
Hi, how do I enable network visibility on a Windows server (NET app)? Should I install the Java agent on the Windows server? Thanks, HI
We are planning to deploy the Python agent for Python version 3.7. Could you please advise which Python agent version is stable? On the AppD download site, I see only Python agent version 4.3.5.13 available for download, and it was last updated on October 18, 2017. Your help will be appreciated. Thanks, Qumrul
Hello folks, I am installing the Gigamon app on Splunk, and it requires the Splunk Stream app as well as the add-on. I followed the instructions provided in the Readme file. When I restart the app, I find this log:

Failed to start streamfwd, the process will be terminated: Incompatible configuration file (run pupgrade.py): /opt/splunk/etc/apps/Splunk_TA_stream/default/vocabularies/gigamon.xml

Stream app version: 7.2.0
Gigamon app version: 1.2.1

I copied the latest xml file over to the Stream app as well as the TA (7.1.1), and it still didn't work: gigamon_vocabulary_7.0.1.xml gigamon_vocabulary_7.1.0.xml gigamon_vocabulary_7.1.1.xml

Do you guys know if there is a new xml file for the newest version of Stream? Any help is appreciated.
I have been asked to install the puppet viewer application into our clustered environment. Reading the install information, it appears to describe a non-clustered environment. What would the instructions be in a clustered environment, and how can I make sure that pushing out new apps from my deployer to the search heads doesn't overwrite anything?
I have a few files with a ton of signatures indicating a malicious actor. The files consist of MD5 hashes, file sizes, filenames, and SHA256 hashes. I'd like to make a dashboard with reports checking for these indicators, but there are hundreds of them and I don't want to enter them by hand. Is there a way to point Splunk at the files and have it parse them to check for the indicators?
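A hedged sketch of one approach, assuming the indicator file has been uploaded as a lookup table named indicators.csv with a column named md5, and that the events carry a hash field such as file_hash (all of these names are placeholders):

index=endpoint_logs
| lookup indicators.csv md5 AS file_hash OUTPUT md5 AS matched_md5
| where isnotnull(matched_md5)

The same pattern can be repeated for the SHA256 and filename columns, and each such search can then back a dashboard panel.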
All, I have a lookup whose output field I in turn want to alias a couple of times, but it doesn't seem to work. I get clienthost back, but the aliases don't appear. Any idea what I might be doing wrong here?

## Some DNS
FIELDALIAS-real_ip_as_clientip = real_ip as clientip
LOOKUP-dns = dnslookup clientip OUTPUT clienthost
FIELDALIAS-clienthost_as_src_host = clienthost AS src_host
FIELDALIAS-clienthost_as_src_dns = clienthost AS src_dns
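A hedged note on the likely cause: in Splunk's search-time operation order, field aliases (FIELDALIAS-) are applied before lookups (LOOKUP-), so an alias defined on a field that a lookup outputs never sees that field. One workaround, sketched here under the assumption that dnslookup behaves like a normal lookup definition, is to rename the output field inside the lookup itself:

LOOKUP-dns_src_host = dnslookup clientip OUTPUT clienthost AS src_host
LOOKUP-dns_src_dns = dnslookup clientip OUTPUT clienthost AS src_dns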
Hi folks, I have an issue where some of my log entries contain null fields which I need to populate in order to run stats against them. From the csv dump below, dest_port is empty, so I basically need to say: where rule=SSH-ACL, populate the empty dest_port field with a value of 22; where rule=NTP-ACL, populate the empty dest_port field with a value of 123. Thanks in advance.
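A hedged sketch using eval and case, assuming dest_port may arrive either as null or as an empty string:

... | eval dest_port=if(isnull(dest_port) OR dest_port=="", case(rule=="SSH-ACL", 22, rule=="NTP-ACL", 123), dest_port)
| stats count BY rule dest_port

The fillnull command is simpler but can only substitute a single fixed value, so rule-dependent defaults need the eval/case form.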
I came across different login pages for the same instance. One is SSO-enabled and the other uses local authentication. What is this called, and why is it required?
Sometimes, especially over weekends, we need to suppress a large set of alerts. Is there a way to do this in bulk, meaning to suppress a set of alerts and, after some time, bring them back?
In "Does TRUNCATE specify the ultimate size of an event?" we looked at standard logging, and we are good with TRUNCATE for the maximum line length and MAX_EVENTS for the maximum number of lines. We are trying to establish limit standards for our data, and we don't know how JSON-type data fits into these limits.
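A hedged sketch of how the same limits might be applied to a single-line JSON sourcetype (the stanza name json_app_logs and the TRUNCATE value are placeholders): because each JSON event typically arrives as one long line, TRUNCATE effectively caps the whole event, and MAX_EVENTS only comes into play when SHOULD_LINEMERGE is on.

[json_app_logs]
INDEXED_EXTRACTIONS = json
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 100000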