All Topics


Hi, I am facing a weird issue: on a Splunk indexer I am trying to filter out log events using props.conf and transforms.conf. I have noticed that filtering works perfectly fine for sourcetypes that are not defined and do not exist in the Splunk default configuration, for example Okta, Jenkins, fluentd, etc. As soon as I try to filter the IIS or Catalina sourcetype, it never works. For example, this props.conf filters the journald sourcetype but not iis.

props.conf
[iis]
TRANSFORMS-routing = setnull
[journald]
TRANSFORMS-routing = setnull

transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
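One way to see whether another app's configuration is winning for the iis sourcetype is to compare the effective, merged stanzas with btool on the indexer (a minimal sketch, assuming a standard $SPLUNK_HOME install):

# Show the merged props.conf settings for the iis stanza and which file each comes from
$SPLUNK_HOME/bin/splunk btool props list iis --debug

# Likewise for the transform referenced from props.conf
$SPLUNK_HOME/bin/splunk btool transforms list setnull --debug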
Does anyone know where I can find some already-created Splunk use cases for GitHub webhook logs? I am having a really hard time googling for a dump of GitHub-based Splunk searches because of the keyword "github". I am trying to look for commits in GitHub with no approvals. I have identified the search for all commits and the search for finding approvals for those commits, but I am unsure how to stitch them together in a single query to produce actionable results. The commit log and the approval log are separate logs, but both have a unique identifier for the commit.

More info: here is the query for the approval and the corresponding log. These logs are heavily redacted and I am only including what is relevant. Logs come in through HEC, so they are JSON.

index=github action=submitted review.state=approved pull_request.head.sha!=""

{
  action: submitted
  pull_request: { head: { sha: <commit-id> } }
  review: { state: approved }
}

Here is the log of the merge; it has no action, so I'm using this query:

index=github after!=""

{
  after: <commit-id>
  before: <previous-commit-id>
  enterprise: {}
  head_commit: {}
  organization: {}
  pusher: {}
  repository: {}
  sender: {}
}

I've been trying to create a table that includes both of these logs, with no luck:

index=github after!="" [search index=github action=submitted review.state=approved pull_request.head.sha!="" | table pull_request.head.sha review.state | rename pull_request.head.sha as commit-id] | table after | rename after as commit-id

So I am essentially looking for commit logs with no approval and trying to link the tables together with after/pull_request.head.sha, as both of these values are unique commit IDs. Ideally I would want to alert on each occurrence of an unapproved merge.
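A minimal sketch of the "pull both event types in one search, then group on the shared commit ID" pattern, using only the field names shown above (the coalesce/stats logic is an untested assumption, not a verified search):

index=github (after!="" OR (action=submitted review.state=approved))
| eval commit_id=coalesce(after, 'pull_request.head.sha')
| stats count(eval(after!="")) AS merge_events count(eval('review.state'="approved")) AS approval_events by commit_id
| where merge_events > 0 AND approval_events = 0

Rows that survive the final where are merged commits for which no approval event was seen, which is the condition an alert could trigger on.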
I need a Splunk search that returns events where addressVal is not equal to outAddressVal. I tried the search below, but it did not help:

index=* addressVal outAddressVal | where (rtrim(ltrim('addressVal ')) != rtrim(ltrim('outAddressVal')))

The content lines look like this:

addressVal = WV ,outAddressVal= RA
addressVal = CA,outAddressVal= RA
addressVal = WV ,outAddressVal= RA
addressVal = WV ,outAddressVal= RA
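A minimal sketch of the comparison with whitespace stripped on both sides; note that the field reference must not contain a trailing space inside the quotes, and trim() is assumed to be enough to normalize the values:

index=* addressVal=* outAddressVal=*
| where trim(addressVal) != trim(outAddressVal)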
We are about to start ingesting Windows process command-line arguments. The Microsoft article states that "Command line arguments can contain sensitive or private information such as passwords or user data." How has anyone resolved this? Did you just restrict who can open the security logs? Did you clear the security logs after a certain timeframe?
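One option sometimes used alongside access restrictions is masking obvious secrets before indexing with a SEDCMD in props.conf on the heavy forwarder or indexer. This is only a sketch: the stanza name depends on how your Windows events are onboarded (it could be a sourcetype or a source:: stanza), and the switch pattern would need to match what your command lines actually look like.

[WinEventLog:Security]
# Replace whatever follows a "-p"/"-password" style switch with a fixed mask (example pattern only)
SEDCMD-mask_cmdline_secrets = s/(-[Pp](assword)?[ :=]+)\S+/\1********/g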
Hello everyone, my client wants the monitoring console and the heavy forwarder to be able to check for and update certain apps. Each time they try to connect they get the following error:

02-13-2022 01:00:01.238 -0500 ERROR ApplicationUpdater [2040833 ApplicationUpdateThread] - Error checking for update, URL=https://apps.splunk.com/api/apps:resolve/checkforupgrade: Connection closed by peer

If I run a curl from the server, the connection is established, so it's not a firewall issue. Do I have to configure something on the Splunk side?

Splunk Enterprise 8.2.2 on x86_64 GNU/Linux. Thank you for your help.
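If outbound traffic from that host has to go through a proxy (curl may be picking up proxy environment variables that splunkd does not see), one thing worth checking is the [proxyConfig] stanza in server.conf. A sketch, assuming Splunk 8.x and a proxy at a placeholder address:

# $SPLUNK_HOME/etc/system/local/server.conf
[proxyConfig]
http_proxy = http://proxy.example.com:8080
https_proxy = http://proxy.example.com:8080
no_proxy = localhost, 127.0.0.1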
Hey everyone! I've spent a good few hours here learning the basics of creating custom packages to load into our Splunk Cloud instance. In the process, I've started playing around with Splunk AppInspect and the Splunk Packaging Toolkit. The thing is, these were bombing hard for me because I was trying to build out of my git working directory: every file in .git basically triggered the validator's fail state. Eventually I found an answer in creating a .slimignore file. The only reference I could find to this file was here: https://dev.splunk.com/enterprise/reference/packagingtoolkit/packagingtoolkitcli#slim-package The manual for slim also mentions that this file should be in the root of the app's development folder.

So this leads me to a fairly basic question: why isn't the standard .git structure included in the default ignore file? It seems this would make overall development easier. (Also interesting that the docs say /local is ignored, but the actual ignore file in the library only lists Python, JetBrains, OSX, and Windows thumbnail files to ignore...)
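For reference, a sketch of the kind of .slimignore that addresses the .git problem described above. The file goes in the root of the app source directory per the linked docs; the exact pattern syntax and these specific entries are an example rather than a verified list:

.git
.gitignore
*.pyc
__pycache__
.DS_Store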
We're deploying the Windows Universal Forwarder add-on to our environment and are using a gMSA. We have configured the basic permissions outlined in "Choose the Windows user Splunk Enterprise should run as" in the Splunk documentation. While we are now getting event log data ingested into Splunk Enterprise, we do not see all of it; I believe we're missing the Security log. Are there any extra security permissions we're missing?
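A common extra step for non-administrator service accounts that need to read the Security event log is membership in the built-in "Event Log Readers" group. A sketch in PowerShell; the domain and gMSA name are placeholders, and whether this alone is sufficient depends on your environment:

# Run elevated on each forwarder host; gMSA account names end with $
Add-LocalGroupMember -Group "Event Log Readers" -Member "CONTOSO\svc-splunkuf$"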
Hi, I want to understand how _time is set by the Microsoft Azure Add-on for Splunk for the sourcetype azure:eventhub.

cat ./etc/apps/TA-MS-AAD/default/props.conf
[azure:eventhub]
SHOULD_LINEMERGE = 0
category = Splunk App Add-on Builder
pulldown_type = 1
####################
# Metrics
####################

cat ./etc/apps/TA-MS-AAD/local/props.conf
[azure:eventhub]
TRUNCATE=0

I got an event with an old _time even though the event was indexed today (per its index time).
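A quick way to see the gap between the parsed event timestamp and the index time for this sourcetype (a sketch; the index name is a placeholder for wherever the event hub data lands):

index=azure sourcetype=azure:eventhub
| eval index_time=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| eval lag_seconds=_indextime - _time
| table _time index_time lag_seconds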
Is it possible to ingest logs from AWS Inspector v2 into Splunk?
Hello Splunkers! Here is the case: we already have clustered indexers on-premises, and we are thinking of getting an additional indexer server in our cloud environment and adding it to the on-premises indexer cluster. What would be the considerations and cons? Thank you in advance.
I've been using tstats in many queries that I run against accelerated data models; most of the time I use it with a simple count() function in the following format:

| tstats prestats=true count AS count FROM datamodel=... WHERE ... BY ...
| eval ...
| lookup ...
| stats count BY ...

This time I need to add sum() and values() functions to the tstats, but it seems I am unable to get it working. If someone could take a look at the queries and let me know what I am doing wrong, that would be great.

The following query (using the prestats=false option) works perfectly and produces output (i.e. the reason, duration, sent and rcvd fields all have correct values):

| tstats prestats=false values(Traffic.reason), sum(Traffic.duration), sum(Traffic.sent), sum(Traffic.rcvd), count AS count FROM datamodel=Network_Log.Traffic BY _time span=auto
| rename "values(Traffic.reason)" AS reason, "sum(Traffic.duration)" AS duration, "sum(Traffic.sent)" AS sent, "sum(Traffic.rcvd)" AS rcvd

When I rewrite the above query with the prestats=true option and use stats to summarize the prestats output, the reason, duration, sent, and rcvd fields are all null. The count field is calculated correctly and displayed in the statistics table.

| tstats prestats=true values(Traffic.reason), sum(Traffic.duration), sum(Traffic.sent), sum(Traffic.rcvd), count AS count FROM datamodel=Network_Log.Traffic BY _time span=auto
| rename "values(Traffic.reason)" AS reason, "sum(Traffic.duration)" AS duration, "sum(Traffic.sent)" AS sent, "sum(Traffic.rcvd)" AS rcvd
| stats values(reason) AS reason, sum(duration) AS duration, sum(sent) AS sent, sum(rcvd) AS rcvd, count by _time

By the way, I followed this excellent summary when I started to rewrite my queries to tstats, and I think what I tried here is in line with the recommendations, i.e. I repeated the same functions in the stats command that I use in tstats and used the same BY clause: https://community.splunk.com/t5/Splunk-Search/What-exactly-are-the-rules-requirements-for-using-quot-tstats/m-p/319801

Regards, Robert
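For reference, the pattern usually shown for prestats=true keeps the original datamodel field names and the same aggregation functions in the follow-on stats, and only renames at the very end; renaming between tstats and stats breaks the prestats bookkeeping. A sketch based on the fields above (untested against this datamodel):

| tstats prestats=true values(Traffic.reason) sum(Traffic.duration) sum(Traffic.sent) sum(Traffic.rcvd) count FROM datamodel=Network_Log.Traffic BY _time span=auto
| stats values(Traffic.reason) AS reason sum(Traffic.duration) AS duration sum(Traffic.sent) AS sent sum(Traffic.rcvd) AS rcvd count BY _time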
I have some sources which can have a significant delay (up to several hours) from the time of the event itself to the time the source transmits the event to my collector/forwarder/whatever is in front. After my Splunk infrastructure receives the event there is no significant delay (usually up to 30 seconds) from receive time to index time, so we won't get into that too deeply.

The problem is that I can either have the ingest/index time parsed out as _time (which of course completely messes up any analytics regarding the "real life" events) or the event's internal time field, which prevents me from doing any stats on the actual transmission performance. I can of course "dig" _indextime out of the events (it's fascinating, though, that I can't display _indextime directly but have to do some magic like evaluating another field to the _indextime value), but with dozens of millions of events it's quite heavy on the system. I can of course do very light summary calculations against _time using tstats, but the problem is that tstats works with span only against the _time field.
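A sketch of the kind of transmission-lag summary that can be done directly against the raw events (admittedly heavier than tstats; the index name is a placeholder, the rest are standard internal fields):

index=my_delayed_source
| eval lag_seconds=_indextime - _time
| bin _time span=1h
| stats count avg(lag_seconds) AS avg_lag perc95(lag_seconds) AS p95_lag max(lag_seconds) AS max_lag by _time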
Hi, I'm struggling trying to count objects in a big JSON doc. I'm on version 8.0.5, so the json_keys function is not available.

{
  "0": { "field1": "123" },
  "1": { "field2": "123" },
  "2": { "field3": "123" },
  "3": { "field4": "123" },
  "4": { "field5": "123" }
}

This is a sample; I am able to get down to the path (startpath) with spath. What I'm trying to do is count the instances of the objects (0, 1, 2, 3, 4). I can't cleanly regex backwards because the real value names are not consistent. I thought I could do something like startpath{} and list them out, but the wildcards {} are not working any way I try it. Thoughts, suggestions?

Thanks, Chris
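One workaround on pre-8.1 versions is to pull the subtree out with spath and count its top-level keys with a multivalue rex. A sketch: "startpath" is taken from the description above, and the pattern assumes (as in the sample) that only the top-level values are themselves objects:

| spath output=subtree path=startpath
| rex field=subtree max_match=0 "\"(?<topkey>[^\"]+)\":\s*\{"
| eval object_count=mvcount(topkey)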
Hi, I need one field to be extracted, or a calculated field. I have two fields that are auto-extracted: action and Severity. The values of the action field are read, debug, and modify; the values of the Severity field are SUCCESS and FAILURE. Out of 100 logs, 50 have the action field, while all logs have the Severity field. Wherever the action field is available in the logs I want its value kept as-is; when there is no action field, I want the value of Severity displayed under the action field instead. Note: the index, sourcetype, and source are the same for all of these logs.
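A minimal sketch of the usual approach with coalesce(), which falls back to Severity only when action is missing (index and sourcetype are placeholders):

index=your_index sourcetype=your_sourcetype
| eval action=coalesce(action, Severity)

If it needs to apply automatically at search time, the same expression is often configured as a calculated field in props.conf (EVAL-action = coalesce(action, Severity)), though that is worth testing since it redefines an auto-extracted field.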
Hi, I'm trying to add a chart using the query below. The chart lines are drawn, but the x-axis shows only the Date field name instead of the dates.

index=_internal FIELD1=* FIELD2=*
| eval Date=strftime(_time, "%d/%m/%y")
| sort Date
| table Date FIELD1 FIELD2
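If the goal is a proper time axis, it is usually easier to keep _time itself and let the chart format it, for example with timechart. A sketch; the avg() aggregation and the daily span are assumptions about what FIELD1/FIELD2 should show:

index=_internal FIELD1=* FIELD2=*
| timechart span=1d avg(FIELD1) AS FIELD1 avg(FIELD2) AS FIELD2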
I have two sets of IIS data (two sourcetypes) in a single index. One sourcetype logs web service requests; the other is a once-daily dump of the available services and their base URIs (app pools and apps). I have two working queries, shown below, and I'm trying to figure out how to combine them, or whether that is even possible. (Trouble is, I'm new to Splunk but I've been a programmer for decades, and I'm trying to wrap my head around "the Splunk way"...)

Returning servers and services with 25+ errors:

index=iis sourcetype=requests sc_status=503
| rex field=cs_uri_stem "(?i)(?<ServiceURI>^\S*\.svc\/)"
| stats count by host,ServiceURI
| where count > 25
| eval ErrCount=count,ErrHost=host
| table ErrCount,ErrHost,ServiceURI

Results look like this:

ErrCount  ErrHost    ServiceURI
106       server123  /app1/route1/filename1.svc/
203       server456  /app2/filename2.svc/

Querying the once-daily service data is a bit more tricky: each server can host multiple services, and each service can expose multiple base URIs, which are stored as a comma-delimited list. So retrieving the service data for a specific server looks like this:

index=iis sourcetype=services earliest=-24h host=server456
| makemv delim="," RootURIs
| mvexpand RootURIs
| table AppName,RootURIs
| where RootURIs != ""

Results look like this:

AppName  RootURIs
pool2    /app2
pool2    /alsoapp2
pool2    /stillapp2/with/routes
pool3    /app3
pool4    /app4

Both searches produce relatively low row counts. I've seen the first produce maybe 15 or 20 hits max in a really bad 15-minute period (these will become alerts). The app info for a really large server might produce a maximum of maybe 200 root URIs. Both execute very quickly (a few seconds, tops). So my goals are:

1. feed the server names from the first search (503 errors) into the second search,
2. match the start of the first search's ServiceURIs to specific AppNames and RootURIs from the second search,
3. output the fields listed in the table statements in both searches.

Is this even remotely possible or realistic? I tried to do this via subsearches, but I think the subsearch has to produce just one result. With my developer and SQL backgrounds, a join was the obvious solution, but I couldn't figure out how to get other fields (like the ServiceURI) into the joined search; I thought I could use $ServiceURI$ but it didn't seem to work. I saw an intriguing comment that the Splunk way is to "collect all the events in one pass and then sort it out in later pipes with eval/stats and friends", but even if that's a good approach, I suspect the volume of data makes this a bad idea (millions of requests logged during a 15-minute window). Help an old code-monkey discover The Splunk Way!
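A sketch of one way to stitch the two searches together: aggregate the 503s first (so the join only sees a handful of rows), left-join the daily services dump on host with max=0 so every RootURI per host is kept, and then keep only rows where the ServiceURI starts with a RootURI. This is untested, and the prefix match via like() is an assumption about how the URIs line up:

index=iis sourcetype=requests sc_status=503
| rex field=cs_uri_stem "(?i)(?<ServiceURI>^\S*\.svc\/)"
| stats count AS ErrCount by host ServiceURI
| where ErrCount > 25
| join type=left max=0 host
    [ search index=iis sourcetype=services earliest=-24h
      | makemv delim="," RootURIs
      | mvexpand RootURIs
      | where RootURIs != ""
      | table host AppName RootURIs ]
| where like(ServiceURI, RootURIs . "%")
| table host ErrCount ServiceURI AppName RootURIs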
Hi, is there a way to create an alert for license consumption? Without going to the license page on the Controller, can I just create an alert or query an API to send me an email when license consumption is about to hit the limit? Thanks!
Dear Splunk community, I need help with a presumably easy task, but it has already cost me quite a while. I'm trying to do a dynamic string substitution to insert specific parameters into specific places in a string. For example:

| makeresults
| eval message="blablabla [%2] blablabla [%1] blablabla [%3]"
| eval param="param1:param2:param3"

Here %1 refers to the respective position in the colon-separated param string. The resulting string would be (note that it is not in param index order):

"blablabla [param2] blablabla [param1] blablabla [param3]"

The number of parameters and indexes in message varies (usually from 1 to 4, but there can also be none).

I've tried to split it into multivalue fields and do some multivalue indexed substitution, and then use a foreach statement or mvjoin, but frankly I failed. I've also considered some hard regex work, but I'm not even sure whether that is possible. Please note that I am limited to Splunk Enterprise 6.5.10.

Regards, MM
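A minimal sketch that stays within 6.5-era eval functions: split the parameter string once, then run one replace() per possible position, leaving the token untouched when that position does not exist. The fixed upper bound of four positions is an assumption based on the description above:

| makeresults
| eval message="blablabla [%2] blablabla [%1] blablabla [%3]"
| eval param="param1:param2:param3"
| eval p=split(param, ":")
| eval message=replace(message, "\[%1\]", coalesce(mvindex(p, 0), "[%1]"))
| eval message=replace(message, "\[%2\]", coalesce(mvindex(p, 1), "[%2]"))
| eval message=replace(message, "\[%3\]", coalesce(mvindex(p, 2), "[%3]"))
| eval message=replace(message, "\[%4\]", coalesce(mvindex(p, 3), "[%4]"))

The coalesce() fallback substitutes the token for itself when mvindex() returns null, so messages with fewer placeholders than four pass through unchanged.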
Hello! I have an SAP Java system that is already monitored via Wily Introscope. In the configtool there is a -javaagent parameter that refers to the Agent.jar file (the path to the Wily agent). I cannot remove the Wily Introscope configuration. What do I need to do to start the AppDynamics Java agent if it is not possible to set another -javaagent in the configtool? Thanks, Eugenio
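For what it's worth, the JVM itself accepts more than one -javaagent option, so the usual approach is to add a second entry rather than replace the first. A sketch of the JVM parameters; the AppDynamics install path is a placeholder, and whether SAP's configtool accepts a second -javaagent entry for your release is the open question:

-javaagent:<existing path to the Wily Agent.jar>
-javaagent:/opt/appdynamics/javaagent/javaagent.jar
-Dappdynamics.agent.applicationName=MyApp
-Dappdynamics.agent.tierName=MyTier
-Dappdynamics.agent.nodeName=MyNode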
Hello community, first I have to say that I'm very, very new to Splunk. I got to Splunk because of a solution I found in the Streamboard community about analysing OSCam logs. So I installed Splunk on Ubuntu along with the OSCam app from 'jotne', and it works nicely. Now that I know what Splunk does, I thought about analysing my router's syslog as well and came across the TA-Tomato app. I configured my router to send its syslog data to a UDP port, the same way OSCam does. The data is stored with index=main and sourcetype=syslog. Great!

Then I got to the very easy-sounding steps mentioned in the README:
- Please onboard your data as sourcetype=tomato
- This app also assumes your data will exist in index=tomato

This may be no issue for someone who is familiar with Splunk, but for me it is. After two days of reading, trying to understand, and testing, I did not get this to work. I played around with some configuration I found here: https://community.splunk.com/t5/All-Apps-and-Add-ons/Unable-to-get-working-with-Tomato/m-p/223350 and ended up copying the files app.conf, props.conf, and transforms.conf to the local directory. (Is it right that if a file exists in the local directory, the one in default is ignored? I think so, but I don't know.)

I inserted this at the top of props.conf:

[host::192.168.0.1]
TRANSFORMS-tomato = set_index_tomato,set_subtype_tomato

and this at the top of transforms.conf:

[set_index_tomato}
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = tomato

[set_subtype_tomato]
REGEX = 192.168.0.1
SOURCE_KEY = MetaData:Host
FORMAT = sourcetype::tomato
DEST_KEY = MetaData:Sourcetype

The sourcetype works, but the index is still 'main'. So, what's wrong with my stupid idea? Thanks
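For comparison, a sketch of the index-routing transform as it is usually written. Note the stanza header ends with a square bracket; the [set_index_tomato} header above would not match the set_index_tomato name referenced from props.conf. The target index also has to exist, and the props/transforms need to live on the Splunk instance that first parses the data:

transforms.conf
[set_index_tomato]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = tomato

props.conf
[host::192.168.0.1]
TRANSFORMS-tomato = set_index_tomato,set_subtype_tomato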