All Topics



Is there a possibility to ingest logs from AWS Inspector v2 into Splunk?
Hello Splunkers! Here is the case. We already have clustered indexers on-premises and are thinking of adding an additional indexer server in our cloud environment to the on-premises indexer cluster. What would be the considerations and cons? Thank you in advance.
I've been using tstats in many queries that I run against accelerated data models; however, most of the time I use it with a simple count() function in the following format:

| tstats prestats=true count AS count FROM datamodel=... WHERE ... BY ... | eval ... | lookup ... | stats count BY ...

This time I need to add sum() and values() functions to the tstats command, but it seems I am unable to get it working. If someone could take a look at the queries and let me know what I am doing wrong, that would be great.

The following query (using the prestats=false option) works perfectly and produces output (i.e. the reason, duration, sent and rcvd fields all have correct values):

| tstats prestats=false values(Traffic.reason), sum(Traffic.duration), sum(Traffic.sent), sum(Traffic.rcvd), count AS count FROM datamodel=Network_Log.Traffic BY _time span=auto | rename "values(Traffic.reason)" AS reason, "sum(Traffic.duration)" AS duration, "sum(Traffic.sent)" AS sent, "sum(Traffic.rcvd)" AS rcvd

When I try to rewrite the above query with the prestats=true option and use stats to summarize on the prestats format, the reason, duration, sent, and rcvd fields are all null. The count field is calculated correctly and displayed in the statistics table.

| tstats prestats=true values(Traffic.reason), sum(Traffic.duration), sum(Traffic.sent), sum(Traffic.rcvd), count AS count FROM datamodel=Network_Log.Traffic BY _time span=auto | rename "values(Traffic.reason)" AS reason, "sum(Traffic.duration)" AS duration, "sum(Traffic.sent)" AS sent, "sum(Traffic.rcvd)" AS rcvd | stats values(reason) AS reason, sum(duration) AS duration, sum(sent) AS sent, sum(rcvd) AS rcvd, count by _time

By the way, I followed this excellent summary when I started to rewrite my queries to tstats, and I think what I tried to do here is in line with the recommendations, i.e. I repeated the same functions in the stats command that I use in tstats and used the same BY clause. https://community.splunk.com/t5/Splunk-Search/What-exactly-are-the-rules-requirements-for-using-quot-tstats/m-p/319801

Regards, Robert
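A hedged sketch of a likely fix: with prestats=true the intermediate results keep the original datamodel field names, so renaming them before the final stats tends to leave everything except count empty. Repeating the same functions against the Traffic.* names and renaming only afterwards may behave as expected (this sketch assumes the same Network_Log.Traffic datamodel as above):

| tstats prestats=true values(Traffic.reason), sum(Traffic.duration), sum(Traffic.sent), sum(Traffic.rcvd), count FROM datamodel=Network_Log.Traffic BY _time span=auto
| stats values(Traffic.reason) AS reason, sum(Traffic.duration) AS duration, sum(Traffic.sent) AS sent, sum(Traffic.rcvd) AS rcvd, count BY _time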
I have some sources which can have a significant delay (up to several hours) from the time of the event itself to the time the source transmits the event to my collector/forwarder/whatever is there. After my Splunk infrastructure receives the event there is no significant delay (up to 30 seconds usually) from receive time to index time, so we won't get into that too deeply. The problem is that I can either have the ingest/index time parsed out as _time (which of course completely messes up any analytics regarding the "real life" events) or the event's internal time field, which prevents me from doing any stats on the actual transmission performance. I can of course "dig" the _indextime out of the events (it's fascinating, though, that I can't display _indextime directly but have to do some magic like evaluating another field to the _indextime value), but with dozens of millions of events it's quite heavy on the system. I can of course do very light summary calculations against _time using tstats, but the problem is that tstats works with span only against the _time field.
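For reference, a minimal sketch of the usual "dig out _indextime" approach mentioned above (index and sourcetype names are placeholders):

index=your_index sourcetype=your_sourcetype
| eval index_time=_indextime
| eval lag_seconds=_indextime - _time
| stats avg(lag_seconds) AS avg_lag, max(lag_seconds) AS max_lag BY sourcetype

As noted in the question, this runs over raw events, so it can be expensive at scale.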
Hi, I'm struggling trying to count objects in a big JSON doc. I'm on version 8.0.5, so the json_keys function is not available.

{
  "0": { "field1": "123" },
  "1": { "field2": "123" },
  "2": { "field3": "123" },
  "3": { "field4": "123" },
  "4": { "field5": "123" }
}

This is a sample; I am able to get down to the path (startpath) with spath. What I'm trying to do is count the instances of the objects (0, 1, 2, 3, 4). I can't cleanly regex backwards, as the real value names are not consistent. I thought I could do something like startpath{} and list them out, but the wildcards {} are not working any way I try them. Thoughts, suggestions? Thanks, Chris
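One hedged sketch of a possible workaround on 8.0.5: pull the object at startpath into a field with spath, then count its top-level keys with a multivalue rex. This assumes that only the top-level entries are themselves objects (as in the sample), and "startpath" stands in for the real path:

| spath path=startpath output=start_obj
| rex field=start_obj max_match=0 "\"(?<objkey>[^\"]+)\"\s*:\s*\{"
| eval object_count=mvcount(objkey)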
Hi, I need one field to be extracted, or I need a calculated field. I have two fields that are auto-extracted (action and Severity).
Values of the action field: read, debug, modify
Values of the Severity field: SUCCESS and FAILURE
Out of 100 logs, 50 logs have the action field; all logs have the Severity field. Wherever the action field is available in the logs I want to keep its value, and where there is no action field I want the value of Severity to be displayed under the action field.
NOTE: For all the logs the index, sourcetype and source are the same.
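A minimal sketch of one way to do this, assuming the goal is to fall back to Severity when action is missing (the sourcetype name below is a placeholder):

| eval action=coalesce(action, Severity)

or, as a calculated field in props.conf on the search head:

[your_sourcetype]
EVAL-action = coalesce(action, Severity)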
Hi, I'm trying to add a chart using the query below. The chart lines are drawn, but the x-axis shows only the Date field name instead of the actual dates.

index=_internal FIELD1=* FIELD2=* | eval Date=strftime(_time, "%d/%m/%y") | sort Date | eval Date=strftime(_time, "%d/%m/%y") | table Date FIELD1 FIELD2
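A hedged alternative sketch that keeps _time on the x-axis so the chart gets real date ticks; it assumes FIELD1 and FIELD2 just need to be counted per day, so adjust the aggregation to the actual data:

index=_internal FIELD1=* FIELD2=*
| timechart span=1d count(FIELD1) AS FIELD1, count(FIELD2) AS FIELD2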
I have two sets of IIS data (two sourcetypes) in a single index. One sourcetype logs web service requests; the other is a once-daily dump of the available services and their base URIs (app pools and apps). I have two working queries as shown below, and I'm trying to figure out how to combine them, or whether that is even possible. (Trouble is, I'm new to Splunk but I've been a programmer for decades, and I'm trying to wrap my head around "the Splunk way"...)

Returning servers and services with 25+ errors:

index=iis sourcetype=requests sc_status=503 | rex field=cs_uri_stem "(?i)(?<ServiceURI>^\S*\.svc\/)" | stats count by host,ServiceURI | where count > 25 | eval ErrCount=count,ErrHost=host | table ErrCount,ErrHost,ServiceURI

Results look like this:

ErrCount  ErrHost    ServiceURI
106       server123  /app1/route1/filename1.svc/
203       server456  /app2/filename2.svc/

Querying the once-daily service data is a bit more tricky: each server can host multiple services, and each service can expose multiple base URIs, which are stored as a comma-delimited list. So retrieving the service data for a specific server looks like this:

index=iis sourcetype=services earliest=-24h host=server456 | makemv delim="," RootURIs | mvexpand RootURIs | table AppName,RootURIs | where RootURIs != ""

Results look like this:

AppName  RootURIs
pool2    /app2
pool2    /alsoapp2
pool2    /stillapp2/with/routes
pool3    /app3
pool4    /app4

Both searches produce relatively low row counts. I've seen the first produce maybe 15 or 20 hits max in a really bad 15-minute period (these will become alerts). The app info for a really large server might produce a maximum of maybe 200 root URIs. Both execute very quickly (a few seconds, tops).

So my goals are:
1. feed the server names from the first search (503 errors) into the second search,
2. match the start of the first search's ServiceURIs to specific AppNames and RootURIs from the second search,
3. output the fields listed in the table statements in both searches.

Is this even remotely possible or realistic? I tried to do this via subsearches, but I think the subsearch has to produce just one result. With my developer and SQL backgrounds, a join was the obvious solution, but I couldn't figure out how to get other fields (like the ServiceURI) into the joined search; I thought I could use $ServiceURI$ but it didn't seem to work. I saw an intriguing comment that the Splunk way is to "collect all the events in one pass and then sort it out in later pipes with eval/stats and friends", but even if that's a good approach, I suspect the volume of data makes this a bad idea (millions of requests logged during a 15-minute window). Help an old code-monkey discover The Splunk Way!
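One hedged sketch of combining the two searches, since both result sets are small: run the errors search first, then join the daily services data on host (allowing multiple matches) and keep only rows where the ServiceURI starts with a RootURI. Field and sourcetype names are taken from the question; the prefix test and the join options are assumptions to verify against the real data:

index=iis sourcetype=requests sc_status=503
| rex field=cs_uri_stem "(?i)(?<ServiceURI>^\S*\.svc\/)"
| stats count AS ErrCount by host, ServiceURI
| where ErrCount > 25
| join type=left max=0 host
    [ search index=iis sourcetype=services earliest=-24h
      | makemv delim="," RootURIs
      | mvexpand RootURIs
      | where RootURIs != ""
      | table host, AppName, RootURIs ]
| where substr(ServiceURI, 1, len(RootURIs)) = RootURIs
| table ErrCount, host, ServiceURI, AppName, RootURIs

If hosts with no match in the services data should still appear in the output, the final where would need a guard such as isnull(RootURIs) OR the prefix test.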
Hi, is there a way to create an alert for license consumption? Without going to the license page on the Controller, can I just create an alert or query an API that sends me an email when license consumption is about to hit the limit? Thanks!
Dear Splunk community, I need help with a presumably easy task, but it has already cost me quite a while. I'm trying to make a dynamic string substitution to insert specific parameters into specific places in a string. For example:

| makeresults
| eval message="blablabla [%2] blablabla [%1] blablabla [%3]"
| eval param="param1:param2:param3"

where %1 refers to the respective position in the param string (colon separated). The resulting string would be (note that it is not in param index order): "blablabla [param2] blablabla [param1] blablabla [param3]". The number of parameters and indexes in the message varies (usually from 1 to 4, but there can also be none).

I've tried to split it into multivalue fields and make some multivalue indexed substitution, and then use a foreach statement or mvjoin, but frankly I failed. I've also considered some hard regex work, but I'm not even sure it's possible. Please note that I am limited to Splunk Enterprise 6.5.10.

Regards, MM
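A hedged sketch of one way this could work on 6.5 without foreach: split param into a multivalue field and substitute each placeholder explicitly. The if/match guard keeps the message intact when a placeholder is absent; the pattern is repeated here for up to four placeholders, which is an assumption based on the question:

| makeresults
| eval message="blablabla [%2] blablabla [%1] blablabla [%3]"
| eval param="param1:param2:param3"
| eval params=split(param, ":")
| eval message=if(match(message, "\[%1\]"), replace(message, "\[%1\]", "[" . mvindex(params, 0) . "]"), message)
| eval message=if(match(message, "\[%2\]"), replace(message, "\[%2\]", "[" . mvindex(params, 1) . "]"), message)
| eval message=if(match(message, "\[%3\]"), replace(message, "\[%3\]", "[" . mvindex(params, 2) . "]"), message)
| eval message=if(match(message, "\[%4\]"), replace(message, "\[%4\]", "[" . mvindex(params, 3) . "]"), message)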
Hello! I have an SAP Java system that is already monitored via Wily Introscope. In the configtool there is a "-javaagent" parameter that refers to the Agent.jar file (the path to the Wily agent file). I cannot remove the Wily Introscope configuration. What do I need to do to start the AppDynamics Java Agent if it is not possible to set another "-javaagent" in the configtool? Thanks, Eugenio
Hello community, first I have to say that I'm very, very new to Splunk. I got to Splunk because of a solution I found in the streamboard community about analysis of OSCam logs. So I've installed Splunk on Ubuntu and the OSCam app from 'jotne' - works nicely. Now, knowing what Splunk does, I thought about analysing my router's syslog as well and came up with the TA-Tomato app. So I configured my router to send the syslog data to the UDP port like OSCam does. Data is stored in index = main; sourcetype = syslog - GREAT!

Now I came to the very easy things mentioned in the README:
- Please onboard your data as sourcetype=tomato
- This app also assumes your data will exist in index=tomato

This may be no issue for someone who is familiar with Splunk, but for me it is. After two days of reading, trying to understand and testing, I didn't get this to work. I played around with some configuration I found here: https://community.splunk.com/t5/All-Apps-and-Add-ons/Unable-to-get-working-with-Tomato/m-p/223350 and ended up copying the files app.conf, props.conf and transforms.conf to the local directory. (Is it right that if a file exists in the local dir, the one in default is ignored? I think so but don't know.)

I inserted:

[host::192.168.0.1]
TRANSFORMS-tomato = set_index_tomato,set_subtype_tomato

at the top of props.conf, and this:

[set_index_tomato}
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = tomato

[set_subtype_tomato]
REGEX = 192.168.0.1
SOURCE_KEY = MetaData:Host
FORMAT = sourcetype::tomato
DEST_KEY = MetaData:Sourcetype

at the top of transforms.conf. The sourcetype works, but the index is still 'main'. So, what's wrong with my stupid idea? Thanks
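For reference, a hedged sketch of the standard index-routing pattern this appears to be aiming at; note that stanza headers use square brackets on both sides, and the tomato index must also exist in indexes.conf on the indexing instance for events to land in it:

props.conf:
[host::192.168.0.1]
TRANSFORMS-tomato = set_index_tomato, set_subtype_tomato

transforms.conf:
[set_index_tomato]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = tomato

[set_subtype_tomato]
REGEX = 192\.168\.0\.1
SOURCE_KEY = MetaData:Host
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::tomato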
Hello, the search below displays _time in human-readable format when the count of the results = 1, but in epoch format when count > 1. How can I get it to display the _time value in human-readable format when count > 1 as well? Notice rows 2, 4 and 5 in my results...

index=aws | stats values(user_type), values(_time), values(eventName) count by user_name | rename values(*) as *
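A minimal sketch of a common workaround: values(_time) shows raw epoch values once there is more than one, so converting to a string before the stats keeps the output readable. Field names are taken from the question; the time format is arbitrary:

index=aws
| eval event_time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| stats values(user_type) AS user_type, values(event_time) AS event_time, values(eventName) AS eventName, count by user_name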
Hello. In an alert's SPL I use case and assign its return value (A or B) to a field "C", and the value of field "C" is included in the alert email's subject.

Example:
SPL (excerpt): | eval C=case(action == "allow" OR action == "alert", "A", action != "allow" AND action != "alert", "B")
Subject: subject + $result.C$

When the SPL search result is a single event there is no problem, but when multiple events are detected and the return value differs per event, it does not behave as expected.

Example:
- Return value of the first event: B
- Return value of the second event: A
=> The subject becomes "subject + B"

I would like to achieve the following; could you tell me whether this is possible and how to do it?
- If at least one event in the SPL search results has the return value "A", use "subject + A"
- If no event in the SPL search results has the return value "A", use "subject + B"

Thank you in advance.
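A hedged sketch of one approach: since $result.C$ takes the value from the first result row, computing an overall value across all events and writing it back into C on every row would make the subject deterministic. This reuses the eval from the question and assumes the rest of the alert stays the same:

... | eval C=case(action == "allow" OR action == "alert", "A", action != "allow" AND action != "alert", "B")
| eventstats values(C) AS all_C
| eval C=if(isnotnull(mvfind(all_C, "^A$")), "A", "B")
| fields - all_C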
Greetings, I am trying to get different log types, such as security and audit logs for example, from a single IP source on my HF instance. How exactly should I set up inputs.conf, transforms.conf and props.conf on my HF to accomplish this? Thanks.
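A hedged sketch of one common pattern: receive everything on one input with a catch-all sourcetype, then rewrite the sourcetype at parse time on the HF based on the event content. The port, stanza names, regexes and sourcetype names below are illustrative assumptions:

inputs.conf:
[udp://514]
sourcetype = mydevice:raw
index = mydevice

props.conf:
[mydevice:raw]
TRANSFORMS-set_sourcetype = set_st_security, set_st_audit

transforms.conf:
[set_st_security]
REGEX = Security
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::mydevice:security

[set_st_audit]
REGEX = Audit
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::mydevice:audit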
A lot of heavy queries make the dashboard take up to a minute to load, and all queries are rerun when changing an option. Is there a way to add a submit button as in old dashboards?
We have some firewall devices that were previously sending data to one index. Now I have to create a new index for some of the devices, which will send data through a TCP port. I'm unable to find the old index, and I'm not sure how to configure the data to be sent to the TCP port through the main Splunk server. The index is created on the master node and I have provided bucket sizes, but what should be done next? Please guide me through the steps to configure this, as it is a very important task for me.
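A hedged sketch of the receiving side, assuming the devices send their data over TCP to an indexer or heavy forwarder and the new index is defined on the peers; the port, index and sourcetype names are placeholders:

inputs.conf on the receiving instance:
[tcp://5514]
index = firewall_new
sourcetype = fw:syslog

indexes.conf, pushed from the cluster master to the peers via the configuration bundle:
[firewall_new]
homePath   = $SPLUNK_DB/firewall_new/db
coldPath   = $SPLUNK_DB/firewall_new/colddb
thawedPath = $SPLUNK_DB/firewall_new/thaweddb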
I am faced with the problem of the Splunk Universal Forwarder consuming the Windows paging file. I didn't find a similar problem in the documentation or in the questions here. What could be the reason? How can I optimize it? splunkforwarder v8.0.4; OS: Windows Server 2008, 4 GB RAM, dynamic paging file.
As far as Splunk behavior goes, when you bring a compressed file into Splunk, I believe Splunk uncompresses it. Is there any difference between placing the compressed file on a Universal Forwarder and importing it directly on the Indexer (Heavy Forwarder)? Specifically, do the import processing speed and the disk or resource usage differ? If so, please let me know.
Hi Community, I need help and advice. I am trying to build a dashboard with geostats to plot locations on a map. We have Rapid7 data and I would like to build a dashboard with a map visualization. My problem is that the Rapid7 data I am ingesting does not have latfield or longfield values; the only usable value is Site_Name, with different sites across the globe. Is there a way to build geostats with only the Site_Name value? I appreciate any help. Regards
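A hedged sketch of one common workaround: maintain a small CSV lookup that maps each Site_Name to coordinates, then feed those into geostats. The lookup file name, its lat/lon column names and the base search below are assumptions:

index=rapid7
| lookup site_locations.csv Site_Name OUTPUT lat, lon
| geostats latfield=lat longfield=lon count by Site_Name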