Hello community, first I have to say that I'm very, very new to Splunk. I got to Splunk because of a solution I found in the streamboard community about analysing OSCam logs. So I've installed Splunk on Ubuntu along with the OSCam app from 'jotne' - works nicely. Now, knowing what Splunk does, I thought about analysing my router's syslog as well and came across the TA-Tomato app. So I configured my router to send its syslog data to the UDP port like OSCam does. Data is stored in index=main, sourcetype=syslog - GREAT! Then I came to the seemingly easy things mentioned in the README: "Please onboard your data as sourcetype=tomato" and "This app also assumes your data will exist in index=tomato". This may be no issue for someone who is familiar with Splunk, but for me it is. After two days of reading, trying to understand, and testing, I didn't get it to work. I played around with some configuration I found here: https://community.splunk.com/t5/All-Apps-and-Add-ons/Unable-to-get-working-with-Tomato/m-p/223350 and ended up copying the files app.conf, props.conf and transforms.conf to the local directory. (Is it right that if a file exists in the local dir, the one in default is ignored? I think so, but I don't know.) I inserted this at the top of props.conf:

[host::192.168.0.1]
TRANSFORMS-tomato = set_index_tomato,set_subtype_tomato

and this at the top of transforms.conf:

[set_index_tomato}
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = tomato

[set_subtype_tomato]
REGEX = 192.168.0.1
SOURCE_KEY = MetaData:Host
FORMAT = sourcetype::tomato
DEST_KEY = MetaData:Sourcetype

The sourcetype override works, but the index is still 'main'. So, what's wrong with my idea? Thanks
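One detail worth checking in the pasted config, offered as a guess rather than a confirmed diagnosis: the first transforms stanza opens with `[set_index_tomato}` (a closing brace instead of a closing bracket), which Splunk would not parse as a stanza header, so the index override never fires even though the sourcetype one does. A corrected sketch of the two local files might look like:

```
# props.conf (local)
[host::192.168.0.1]
TRANSFORMS-tomato = set_index_tomato,set_subtype_tomato

# transforms.conf (local)
[set_index_tomato]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = tomato

[set_subtype_tomato]
REGEX = 192\.168\.0\.1
SOURCE_KEY = MetaData:Host
FORMAT = sourcetype::tomato
DEST_KEY = MetaData:Sourcetype
```

On the local/default question: settings in `local` override the same settings in `default` per key, not per file, so the rest of the default file still applies.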
Hello, the search below displays _time in human-readable format when the result count = 1, but in epoch format when count > 1. How can I get it to display the _time value in human-readable format when count > 1 as well? Notice rows 2, 4 and 5 in my results...

index=aws
| stats values(user_type), values(_time), values(eventName) count by user_name
| rename values(*) as *
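One way to get readable timestamps out of `stats values(_time)`, sketched against the search in the post: `_time` only renders as a date while Splunk still treats the field as a timestamp, so format it into a plain string field before aggregating, e.g.

```
index=aws
| eval time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| stats values(user_type), values(time), values(eventName) count by user_name
| rename values(*) as *
```

With a single value, the UI still recognizes the field as a time and formats it; with multiple values it shows the raw epoch numbers, which is why the behavior differs by count.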
Hello. I'm using case() in an alert's SPL; its return value (A or B) is assigned to a field "C", and the value of "C" is used in the alert email's subject. Example (SPL excerpt): | eval C=case(action == "allow" OR action == "alert", "A", action != "allow" AND action != "alert", "B")  Subject: subject+$result.C$. When the SPL search returns a single event there is no problem, but when multiple events are detected and the return value differs per event, the behavior is not what I expect. Example: first event's return value: B; second event's return value: A ⇒ the subject becomes "subject+B". I would like to achieve the following - could you tell me whether it is possible, and how? If at least one event in the search results has the return value "A", the subject should be "subject+A"; if no event has the return value "A", the subject should be "subject+B". Thank you in advance.
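One possible approach, sketched as a guess: since $result.C$ takes the value from the first result row, make C uniform across all rows so that "A" wins whenever it appears at least once, e.g.

```
| eval C=case(action == "allow" OR action == "alert", "A",
              action != "allow" AND action != "alert", "B")
| eventstats count(eval(C="A")) as a_count
| eval C=if(a_count > 0, "A", "B")
```

eventstats keeps every event in the results (so the alert still sees all of them) while every row now carries the same C, so the subject token resolves to "A" if any event matched and "B" otherwise.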
Greetings, I am trying to ingest different log types, for example security and audit logs, from a single IP source on my HF instance. How exactly should I configure inputs.conf, transforms.conf and props.conf on my HF to accomplish this? Thanks,
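A common pattern for splitting log types that arrive from one host is a props stanza keyed on the host, with transforms that rewrite the sourcetype based on a regex over the raw event. A sketch, in which the host, regexes and sourcetype names are all placeholders to be replaced with your own:

```
# props.conf on the HF
[host::10.0.0.5]
TRANSFORMS-split = set_st_security, set_st_audit

# transforms.conf on the HF
[set_st_security]
REGEX = Security
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::my:security

[set_st_audit]
REGEX = Audit
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::my:audit
```

The same DEST_KEY mechanism with `_MetaData:Index` can route the different log types to different indexes if needed.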
A lot of heavy queries make the dashboard take up to a minute to load, and all queries are rerun when changing an option. Is there a way to add a submit button as in old dashboards?
We have some firewall devices that were previously sending data to one index. Now I have to create a new index for some of the devices to send data through a TCP port. I'm unable to find the old index, and I'm not sure how to configure the data to be sent to a TCP port through the main Splunk server. The index is created on the master node and I have provided bucket sizes, but what should be done next? Please guide me through the configuration steps, as this is a very important task for me.
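For reference, a TCP network input on the receiving instance is a short inputs.conf stanza; the port, index and sourcetype below are placeholders, not values from the post:

```
# inputs.conf on the receiving instance
[tcp://5514]
index = firewall_new
sourcetype = fw:syslog
connection_host = ip
```

Also note that an index created on the cluster master is not usable until it is distributed to the indexer peers via the configuration bundle (indexes.conf in the master-apps directory, then apply the bundle).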
I'm facing a problem with the Splunk Universal Forwarder consuming the Windows paging file. I didn't find a similar problem in the documentation or in the questions here. What could be the reason, and how can I optimize it? splunkforwarder v8.0.4; OS: Windows Server 2008, 4 GB RAM, dynamic paging file.
As I understand Splunk's behavior, a compressed file is uncompressed when it is brought into Splunk. Is there any difference between placing the compressed file on a Universal Forwarder versus importing it directly on an Indexer (Heavy Forwarder)? Specifically, do the speed of the import processing and the disk or spec usage rate differ? If so, please let me know.
Hi Community. I need help and advice. I'm trying to build a dashboard with geostats to plot locations on a map. We have Rapid7 data and I would like to build a dashboard with a map visualization. My problem is that the Rapid7 data I am ingesting does not have latfield or longfield values. The only value is Site_Name, with different sites across the globe. Is there a way to use geostats with only the Site_Name value? I appreciate any help. Regards
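geostats needs latitude/longitude fields, so with only Site_Name one approach is a small CSV lookup mapping each site to coordinates. A sketch, where the lookup name and its lat/lon column names are made up for illustration:

```
... | lookup site_coords Site_Name OUTPUT lat, lon
| geostats latfield=lat longfield=lon count by Site_Name
```

The lookup file would be a CSV with columns Site_Name, lat, lon, uploaded under Settings > Lookups and given a lookup definition named site_coords.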
Hello everyone, I need a query to find out how much data sourcetype=gshshsh is using: 1. day-wise usage for the month of February, and 2. total data usage from September to February.
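A common way to measure ingest per sourcetype is the license usage log on the license master; the index and source are standard, and the sourcetype value below is the one from the post:

```
index=_internal source=*license_usage.log type="Usage" st=gshshsh
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) as daily_GB
```

Run it with the time picker set to February for the day-wise view, or September through February for the longer window (note that _internal retention defaults to about 30 days, so older usage may need the license usage report view instead).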
Hi All, Is there any search query to find the configuration for a particular app or index using the Splunk Web UI?
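From the search bar (which counts as the Web UI), the REST endpoints expose merged configuration; a sketch for props.conf entries, with the app name as a placeholder:

```
| rest /services/configs/conf-props
| search eai:acl.app="your_app_name"
```

Index settings are similarly visible via `| rest /services/data/indexes`, filtered on the `title` field for the index of interest.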
Hi All, Can someone please help me with masking data and regex? Currently we have an event where I need to mask certain data in a field extraction. I have already worked out the basic regex for Sample 1: | rex field=_raw "("PAE"\/)(?<Mask_Data>\d+\W\w+\d\s)" but I am looking for a common (or separate) regex for all the samples below. I want to keep the events but mask the numbers before " : : " and after the "/"; I'm fine if only the numbers in the tail get masked.

EVENT samples:
1) Request_URL=ghghghghghhghghghhghg/eeeee/xxx/functionalPAE/188888/WWEE1112: :
2) Request_URL=ghghghghghhghghghhghg/eeeee/xxx/functionalAssessment/188888/EEE3456823947 : :
3) Request_URL=ghghghghghhghghghhghg/eeeee/xxx/functionalAssessmentFromEEF/11111233 : :
4) Request_URL=ghghghghghhghghghhghg/eeeee/xxx/functionalAssessmentFromservices/1333/11233 : :
Thanks in advance.
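One sketch that covers all four samples, on the assumption (inferred from them) that digits only ever appear in the trailing path segments: a sed-mode rex that replaces every digit in the Request_URL field with a mask character:

```
... | rex field=Request_URL mode=sed "s/\d/#/g"
```

If other parts of the event contain digits that must survive, anchor the substitution to the slash-delimited tail instead, e.g. `s/\/(\d+)/\/####/g`, which masks each all-numeric segment after a "/" with a fixed-width mask.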
I know this is available at an application level, but is there a way to do it at a tier level so other tiers in the application are not affected? Or is there a cunning workaround where we could disable it for all and then have a health rule so it still fires for the other tiers? Thanks! Jeremy.
Hello everyone, We are using the Splunk_TA_nix add-on to get some logs from our Linux servers, but we noticed that when we run the Health Check in the Monitoring Console we get an alert: the index fed by that specific app looks like it is generating a lot of sourcetypes. I checked the documentation and I cannot see it listed as a known issue. So I would like to know whether this is expected behavior, or whether there is any way we can fix it. Splunk Enterprise: 8.2.2 on x86_64 GNU/Linux; Splunk_TA_nix: 8.3.1. Thank you in advance
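To see exactly which sourcetypes the add-on is producing and at what volume, a quick tstats sketch (the index name here is a guess; substitute the one your TA_nix inputs write to):

```
| tstats count where index=os by sourcetype
| sort - count
```

TA_nix legitimately emits many distinct sourcetypes (one per scripted input: cpu, df, vmstat, and so on), so a high sourcetype count from this app is not necessarily abnormal.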
The basic issue I faced was a dashboard with a prominent single-value visualisation that was to display a count of exceptions.  The users wanted 0 exceptions to be the "good" color and a range of colors after that. To demonstrate, here is a simple test dashboard making use of the excellent features of single-value viz.

<form>
  <label>test single value viz</label>
  <fieldset submitButton="false">
    <input type="text" token="limit">
      <label>limit</label>
      <default>2</default>
      <initialValue>2</initialValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <single>
        <search>
          <query>| gentimes start=1/25/2022 end=1/26/2022 increment=1h | eval count=random()%$limit$ | eval _time=starttime | table _time count | timechart span=6h sum(count) as count</query>
          <earliest>-1h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="colorBy">value</option>
        <option name="colorMode">block</option>
        <option name="drilldown">none</option>
        <option name="numberPrecision">0</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
        <option name="rangeValues">[0,30,70,100]</option>
        <option name="showSparkline">1</option>
        <option name="showTrendIndicator">1</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
        <option name="trendColorInterpretation">standard</option>
        <option name="trendDisplayMode">absolute</option>
        <option name="unitPosition">after</option>
        <option name="useColors">1</option>
        <option name="useThousandSeparators">1</option>
      </single>
    </panel>
  </row>
</form>

The default limit of 2 will result in a viz showing a lovely blue background, with values and a trendline depending on the random data generated; a limit of 20 will most likely produce an orange background, and a limit of 200 a red background. All this is expected and in accordance with the default viz that was produced by using the "Save As Dashboard Panel" option from the base window.
A limit of 1 - which results in all data values of 0 - gives the green background.  This is still expected. Where I struggle is a limit of 0 (or less), which gives no data, as number % 0 is undefined.  The data for such a search has no values in the count column.    So what to do?  The single-value viz has decided that null values are nearer the max value than the min value, which makes sense if you use default colors, because the max value is colored red.  But if in your situation your low values are the more aberrant ones, and you consider null values to be aberrations, you'd want to have the nulls colored like your min value.  Also strange, though, is that the value on the chart shows 0, even if all the values in the data set are null.  Suddenly null became 0 and not undefined, and thus 0 is treated as higher than max instead of lower than min.  I find this to be a mistake - either it's treated as 0, so color it as 0 and show it as 0, or it's treated as null, so colour it as null and show it as null (or undefined, or something other than 0). The only workaround I could find (without looking at css changes) is a bit ugly and may not suit all situations. I kludge the upper limit to some value "higher than I could ever reach" (famous last words) and stick the colour I want to display for no data there.

<option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41","0x53a051"]</option>
<option name="rangeValues">[0,30,70,100,100000000]</option>

In the real-world situation I had, zero values were considered good, and no data at all is also good, so the quick fix of the viz as above was enough to allow users to visualise the data. A better solution is to change the base search to something that always returns a 0 rather than null, or to add a line after the timechart to force nulls to an acceptable value.   I like the latter, as it makes it far more clear what's going on.   
| eval count=coalesce(count, 0)

When no data at all is returned by the base search (as happened in my real-world case), it can be handled the normal way with a hidden panel that displays when no data is returned.  Side note on this: I usually have one panel that displays when there is no data for the base search but there is some data in the index/sourcetype, and a different panel when there is no data at all.  This is because on rare occasions you may have a problem with a forwarder, or any number of other reasons resulting in events taking longer than expected to appear in an index.  Letting the user know this is the case, rather than assuming "all's good", is better in my view. With real-world data it's ugly to manipulate the source into the visualisation just to make it look right.  Sometimes we have to, but here I think the single-value visualisation needs an option to let the user decide how to display missing or null values.  
I'm using Splunk Enterprise 8.2.4 and I would like to get my Windows forwarder estate (8.2.4) to send its perfmon data to a metrics index. Initially I thought this would be easy, but I was wrong: I thought that out of the box Splunk would allow me to collect Windows perfmon data straight to a metrics index.  I think from reading the guide that the pattern is as follows: configure the forwarder inputs stanza as normal, i.e. as you would to collect, say, the CPU metrics to an events index; point it at a metrics index, tagged with a custom sourcetype; then transform/parse the event to metrics format at the indexer when received, based on the sourcetype. Is this understanding correct, and if so, does anyone have a bundle of transforms ready to go (perhaps a TA or app that does this, like Splunk Add-on for Microsoft Windows | Splunkbase)?
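The three-step pattern described in the post can be sketched roughly as below, using Splunk's log-to-metrics mechanism. The stanza names, index name and sourcetype here are illustrative guesses, not the actual contents of any add-on:

```
# inputs.conf on the forwarder (sketch)
[perfmon://CPU]
object = Processor
counters = % Processor Time
instances = *
index = win_metrics
sourcetype = perfmon_metrics_cpu

# props.conf on the indexer (sketch)
[perfmon_metrics_cpu]
METRIC-SCHEMA-TRANSFORMS = metric-schema:extract_cpu

# transforms.conf on the indexer (sketch)
[metric-schema:extract_cpu]
METRIC-SCHEMA-MEASURES = _ALLNUMS_
```

`METRIC-SCHEMA-TRANSFORMS` / `METRIC-SCHEMA-MEASURES` is the generic log-to-metrics wiring; whether the Windows add-on ships ready-made metric sourcetypes for perfmon is worth checking in its release notes before building this by hand.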
I have a Splunk On Call webhook that is using a POST request to send data to my index and sourcetype. Anytime a user enters a chat message for an incident, it fires the webhook and data immediately gets added to that sourcetype. My issue: the raw events in the index and sourcetype show one event. However, when I table the data, the values in each field get duplicated with the same data as a multivalue field. Based on other Splunk Community questions, I've made some changes to the sourcetype settings:

[mysourcetype]
AUTO_KV_JSON = false
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Structured
disabled = false
pulldowntype = true

This did not fix the issue as it has for others. I have tried creating sourcetypes a few different ways: 1. Going into Settings > Sourcetypes > selecting "New Source Type" and updating the settings. 2. Cloning the _json sourcetype that Splunk ships so I can keep the settings, but I am still getting duplicate values when I table. 3. Going into Settings > Data Inputs > HTTP Event Collector > selecting "New Token" > creating a new sourcetype in "Input Settings". I also noticed that the JSON events do not get syntax highlighting by default. Is this due to KV_MODE being set to none? Can I set it to json without duplicating my data?
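One cause that matches these symptoms, offered as a guess: with `INDEXED_EXTRACTIONS = json` the fields are written at index time, and if the search head still runs automatic JSON extraction on top, every field appears twice. The `KV_MODE = none` setting only prevents this if the stanza exists on the search head as well as on the instance doing the parsing, e.g.:

```
# props.conf on the search head
[mysourcetype]
KV_MODE = none
AUTO_KV_JSON = false
```

The alternative is the reverse split: drop `INDEXED_EXTRACTIONS` entirely and let `KV_MODE = json` on the search head do all the extraction, which also restores the JSON syntax highlighting in the event view.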
Hello all, In our company I need to create a daily email notification for: remote login, disabled account, event log stopped or cleared, and account lockout. Please suggest which Windows events correspond to the above alerts. From my reading I think 4624, 4725, 1102 and 4740 are the Windows event IDs that I need to monitor, but I'm not sure. Thank you
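The IDs in the post line up with the usual mapping: 4624 is a successful logon (logon type 10 is the remote/RDP case), 4725 is an account disabled, 1102 is the audit log cleared (1100 covers the event logging service shutting down), and 4740 is an account lockout. A daily alert search could be sketched like this, where the index name and the `Logon_Type` field name (which varies with the Windows add-on version) are assumptions:

```
index=wineventlog (EventCode=1100 OR EventCode=1102 OR EventCode=4624 OR EventCode=4725 OR EventCode=4740)
| eval alert_type=case(EventCode==4624 AND Logon_Type==10, "Remote login",
                       EventCode==4725, "Account disabled",
                       EventCode==1100 OR EventCode==1102, "Event log stopped or cleared",
                       EventCode==4740, "Account lockout")
| where isnotnull(alert_type)
| stats count by alert_type, host
```

Scheduled daily with the email alert action, this produces one row per alert type per host.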
I have a field whose value ranges from 0 to 20. I want to plot a graph showing the range of values hit for the field each day. I tried using timechart, but instead of giving me ranges per day it builds a series per value, like "value 1 occurred on day 1, day 2, day 4". I need it to tell me which values occurred on a particular day, rather than which days have those values.

index=a $search string$ | eval bytes=bytes/1000000 | timechart count by bytes

I hope I've explained what I'm trying to do.
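One way to get a per-day distribution rather than one series per distinct value is to bucket the field before charting; the span sizes below are arbitrary examples:

```
index=a $search string$
| eval bytes=bytes/1000000
| bin bytes span=5
| timechart span=1d count by bytes
```

`bin` collapses the 0-20 range into a handful of bands (0-5, 5-10, ...), so each day shows how many events fell into each band instead of one series per exact value.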
I am running into an issue when trying to get a chart to populate with the data as I am expecting. I am running a search over IIS log data that parses out the referrer_stem and then counts the total of each referrer_stem per month.  I am also splitting out the month field into both the short name and the numerical value (for testing the sort on each). This is the end portion of my search:

| eval date_month=strftime(_time, "%b")
| eval number_month=strftime(_time, "%m")
| chart count BY referrer_stem, date_month
| sort 10 - count

The issue I am having is that with the date_month field the columns or bars show out of order (i.e. Feb before Jan), whereas with number_month the order is correct (i.e. 01 02).  I want it to show in the correct order but using the month's short name. I did try a case statement with number_month, but that doesn't work because after the chart command the field no longer seems to exist (or I just don't know how to access the right name). Any help or insight on this would be greatly appreciated.
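A common trick for this: build a single label that sorts lexically in calendar order but still displays the short name, e.g. "01-Jan", "02-Feb":

```
| eval date_month=strftime(_time, "%m-%b")
| chart count BY referrer_stem, date_month
```

Since "01-Jan" < "02-Feb" as strings, the chart's default column ordering matches the calendar without any extra sort step.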