Hi, I would like to compare the data of the previous month with the month before that (i.e. it's now October, so the default search should show September and August as line charts in one panel). In the edit-search portion of the dashboard, I've set the Time Range to 'Tokens', the Earliest Token to '@mon', and the Latest Token to '@mon-1'. May I know how I can include the chart for an Earliest Token of '@mon-1' and a Latest Token of '@mon-1' within the same search/panel? Thanks.
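One possible approach (a sketch, with your_index as a placeholder): run a single daily timechart over both months and let timewrap overlay one month on the other in the same panel:

```
index=your_index earliest=-2mon@mon latest=@mon
| timechart span=1d count
| timewrap 1month
```

timewrap renders each wrapped month as its own series, so September and August appear as two lines in one chart without a second search.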
Hi Splunkers! I'm working on detecting root privilege escalation in Linux logs, and I have limited experience with them. Can anyone help me understand which Linux logs I should work with and how I should proceed? Thanks in advance.
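As a starting point (a sketch only; the index and sourcetype names are assumptions that depend on how the Linux logs were onboarded), sudo/su activity from /var/log/auth.log or /var/log/secure is a common place to begin:

```
index=linux sourcetype=linux_secure ("sudo:" OR "su:" OR "COMMAND=")
| stats count by host
```

From there, narrowing to specific commands or accounts (e.g. COMMAND=/bin/su, uid=0) helps separate routine administration from suspicious escalation.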
I am very new to Splunk. I have an access.log file which contains a URL and a query string:

url                              queryString
http://host/getOrder             id=1&id=2&id=3
http://host/getUser              id=1&id=2
http://host/getUser              id=2&id=3

How can I count, per URL, the occurrences of "id" in the query string? The result I want would be:

url                              IdCount
http://host/getOrder             3
http://host/getUser              4

Thanks in advance
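A sketch, assuming url and queryString are already extracted as fields (the sourcetype name is a placeholder): count the id= occurrences per event with rex max_match, then sum per URL:

```
sourcetype=access_combined
| rex field=queryString max_match=0 "(?<id_hit>id=)"
| eval IdCount=mvcount(id_hit)
| stats sum(IdCount) as IdCount by url
```

max_match=0 captures every occurrence into a multivalue field, so mvcount gives the per-event count and stats sum rolls it up by url.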
Hi, I'm quite new to Splunk and figuring out ways to add things to a simple dashboard I'm building. I need to add a tooltip to a particular column in my dashboard table. Is there any way I can add a tooltip using just the Simple XML file? I can't use Bootstrap, as I don't have access to those files on the server. Any help would be appreciated. Thanks in advance.
I'm trying to find an elegant way to compare the results of multiple searches, all of which have identical field names. I need to compare all values of the same fields to each other and, if a condition is met, display the fields in the event. I have seen many examples that show how to do this with different field names, but I don't see a clear way to loop through the same fields across all events.

The only workable approach I have found (and it is highly undesirable) is to rename the fields of each search and then compare them. For instance, if search1 returns 'day_of_week', 'color', 'shape', and 'distance', one would need to rename the fields in search2 to 'day_of_week2', 'color2', 'shape2', and 'distance2' to compare them. With a lot of fields (which I have), this gets complex and error-prone fast. Is there a way to assign a reference to each search and compare like: foreach search1.day_of_week != search2.day_of_week? Or somehow leverage the field's index to loop through the field values in all events and compare?

My goal is to compare all instances of each field (and combinations of field values) to each other. Based on equality or inequality, I want to step through the data and make a final comparison if the prior conditions are satisfied. For example, I'd like to compare an event's 'distance' to 'distance' in all other events where the values of 'day_of_week' AND 'color' AND 'shape' are equal (or not equal).

Is there any straightforward way of doing this without a lot of complex logic or explicitly defining a value to compare against? I don't want to define the values I'm looking for in advance; I'd like to use the values returned by the search to set the value being compared. This feels like a simple task, but I'm not finding an elegant solution. I'd be truly grateful for some suggestions.
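One join-free pattern worth considering (a sketch; the index and search criteria are placeholders): combine the searches into one and use the shared fields as group-by keys, so events that agree on day_of_week, color, and shape land in the same group and their distance values can be compared directly:

```
index=your_index ((criteria_for_search1) OR (criteria_for_search2))
| stats values(distance) as distances dc(distance) as distinct_distances by day_of_week color shape
| where distinct_distances > 1
```

The where clause keeps only groups whose distance values differ across events; flipping it to distinct_distances = 1 keeps the groups where they match. This avoids renaming any fields, since the grouping itself does the event-to-event comparison.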
I have created a metrics dashboard with a column chart. By default the scale is "Linear", which had issues: lower values were not plotted. As a fix, I configured the chart to use a "Log" scale, which met the requirement. But the issue now is that when the count values are around 1, 3, 7, or 14, they are all maxed at 1. It's odd, because the expectation is that the axis would be plotted as 0, 1, 10, 100 and the values placed accordingly. I'm not sharing the query, as this is purely based on the count value. Would someone know if this is expected behavior, or is there some config I might have missed that should be tweaked? The dashboard config is below, if it helps:

<row>
  <panel>
    <title>Metrics Panel</title>
    <chart>
      <search>
        <query>index=abcx "logData.logType"=INFO | timechart span=$slice$ count(feedName)</query>
        <earliest>$time_range$</earliest>
        <latest>now</latest>
      </search>
      <option name="charting.chart">column</option>
      <option name="charting.axisTitleX.text">Timestamp</option>
      <option name="charting.axisY.scale">log</option>
      <option name="charting.chart.showDataLabels">all</option>
    </chart>
  </panel>
</row>
I'll start by saying I may be doing this completely wrong. I need help removing the first two lines and the last two lines of a file via props and transforms. What I have tried so far only removes the first two lines, so all events process properly except the last one in the file, because the last two lines end up mucking up the JSON for that event. I have a JSON file (sample content below); the file starts with "value": [ and has several hundred objects in that array.

{
  "value": [
    {
      "properties": {
        "roleName": "Virtual Machine Administrator",
        "type": "CustomRole",
        "description": "administer and update virtual machines.",
        "assignableScopes": [
          "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx"
        ],
        "permissions": [
          {
            "actions": [
              "Microsoft.Storage/*/read",
              "Microsoft.Compute/virtualMachines/performMaintenance/action"
            ],
            "notActions": []
          }
        ],
        "createdOn": "2018-11-01T20:32:29.71317Z",
        "updatedOn": "2018-11-01T20:32:29.71317Z",
        "createdBy": "af5e3f18-3a18-4141-8296-5efb1b267cd9",
        "updatedBy": "af5e3f18-3a18-4141-8296-5efb1b267cd9"
      },
      "id": "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx/providers/Microsoft.Authorization/roleDefinitions/92e07475-99a8-4e12-9fc2-c4034be97904",
      "type": "Microsoft.Authorization/roleDefinitions",
      "name": "xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx"
    },
    {
      "properties": {
        "roleName": "Virtual Machine Support",
        "type": "CustomRole",
        "description": "Can administer and update virtual machines.",
        "assignableScopes": [
          "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx",
          "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx"
        ],
        "permissions": [
          {
            "actions": [
              "Microsoft.Storage/*/read",
              "Microsoft.Compute/disks/delete",
              "Microsoft.Compute/disks/write",
              "Microsoft.Compute/snapshots/write",
              "Microsoft.Compute/disks/beginGetAccess/action"
            ],
            "notActions": []
          }
        ],
        "createdOn": "2018-11-28T02:09:47.2262816Z",
        "updatedOn": "2020-09-14T17:33:57.5619979Z",
        "createdBy": "xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx",
        "updatedBy": "xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx"
      },
      "id": "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx/providers/Microsoft.Authorization/roleDefinitions/e74f813f-9dee-48f4-a0ba-ec37f07a95f9",
      "type": "Microsoft.Authorization/roleDefinitions",
      "name": "xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx"
    }
  ]
}

All I care about is what is in the array (as individual events in Splunk). So I'd like to strip off, at the beginning:

{ "value": [

and remove the following from the end:

] }

If I do that, then everything I have works perfectly in Splunk. My current problem is that my props and transforms will remove { "value": [ from the beginning, but I can't seem to remove the ] } from the end.

## props.conf
[mscs:azure:roledef]
TRANSFORMS-timestamp = timestampeval
TRANSFORMS-elimL1 = eliminateL1, eliminateLE
KV_MODE = json
LINE_BREAKER = (?ms)[\r\n]+\s{4}}(,[\n\r]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TRUNCATE = 0
category = Structured
description = A variant of the JSON source type, with support for nonexistent timestamps
disabled = false
pulldown_type = true

## transforms.conf
[timestampeval]
INGEST_EVAL = _time=strptime(replace(source,".*(?=\\\)\\\\",""),"Role Definitions_%Y-%m-%dT%H %M %S")

[eliminateL1]
REGEX = (?ms)^(?:{.+"value":\s\[.)
DEST_KEY = queue
FORMAT = nullQueue

[eliminateLE]
REGEX = (?ms)(?:\s+]\s})$
DEST_KEY = queue
FORMAT = nullQueue
Hi, I have 3 queries, as below, and all 3 of them have a common field, "loaderId". I used join to combine their results into a table and calculated P95 on their durations, but I believe there has to be a better approach to this problem. Please let me know one. E.g.:

index=* ...
| search eventSource="Page Load"
| table eventSource, duration1, loaderId
| join loaderId
    [search index=* ...
    | search eventSource="End to End Time"
    | table eventSource, duration2, loaderId
    | join loaderId
        [search index=* ...
        | search eventSource="Total Time"
        | table eventSource, duration3, loaderId]]
| table eventSource, duration1, duration2, duration3, loaderId
| stats perc95(duration1), perc95(duration2), perc95(duration3)
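A join-free sketch (assuming all three event types live in the same index and each event carries its duration in one of the three fields): pull everything in one search, collapse the three duration fields into one, and compute the percentile per event source:

```
index=* eventSource IN ("Page Load", "End to End Time", "Total Time")
| eval duration=coalesce(duration1, duration2, duration3)
| stats perc95(duration) as p95 by eventSource
```

coalesce picks whichever duration field is present on each event, so a single stats pass replaces the nested joins.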
I'm a new Splunk user and I'm trying to get a simple working data feed that I can build on; I tend to learn better that way. I'm trying to use the Ecobee API to poll my smart thermostat for data to import and index in Splunk. I've searched and found the Splunk Add-On Builder, which seems fitting for the job, but I'm having trouble successfully configuring the app. I can successfully run the following command from the Splunk server CLI:

curl -s -H 'Content-Type: text/json' -H 'Authorization: Bearer MY_ACCESS_TOKEN' 'https://api.ecobee.com/1/thermostat?format=json&body=\{"selection":\{"selectionType":"registered","selectionMatch":"","includeRuntime":true\}\}'

I'm having trouble running a successful test with what I believe are the same settings in the Add-On Builder data input wizard. Below are the settings I'm using to try to mirror the command above:

REST URL: https://api.ecobee.com/1/thermostat?format=json&body=\{"selection":\{"selectionType":"registered","selectionMatch":"","includeRuntime":true\}\}
REST method: GET
REST request headers:
Content-Type: json
Authorization: Bearer MY_ACCESS_TOKEN

The output given is: The response status=403. I feel that if I can get a basic query working, I can step it out from there, but I can't figure out what I'm doing wrong. Any suggestions?
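One way to sanity-check the request outside the Add-On Builder is to rebuild it in Python (a sketch; MY_ACCESS_TOKEN is a placeholder). Two details worth comparing against the wizard settings: the backslashes in the curl command are shell escapes and should not appear in the REST URL itself, and the curl Content-Type value is the full MIME type 'text/json' rather than just 'json':

```python
import json
from urllib.parse import urlencode

# Placeholder; substitute a real Ecobee access token.
token = "MY_ACCESS_TOKEN"

# The same body the curl command sends; the backslashes in the curl URL
# are shell escaping and are not part of the actual request.
body = {"selection": {"selectionType": "registered",
                      "selectionMatch": "",
                      "includeRuntime": True}}

# URL-encode the query string the way the server will receive it.
params = urlencode({"format": "json", "body": json.dumps(body)})
url = "https://api.ecobee.com/1/thermostat?" + params

# Note the full MIME type: 'text/json', not just 'json'.
headers = {"Content-Type": "text/json",
           "Authorization": "Bearer " + token}
```

Printing url and headers and comparing them field-by-field with what the wizard sends can narrow down where the 403 comes from.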
Hi team! How can I optimize the following search? I want to find IPs that have launched an attack and been blocked by the UTM, but that have also registered an allowed connection.

index=xxxx type=utm action=blocked
| table srcip
| join type=inner [search index=xxxx type=traffic action=allowed]
| stats count by srcip

Thanks in advance!
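A join-free sketch: pull both event types in one search and keep only the srcip values that appear with both actions:

```
index=xxxx ((type=utm action=blocked) OR (type=traffic action=allowed))
| stats dc(action) as action_count by srcip
| where action_count=2
```

Since join subsearches are capped in both results and runtime, folding both conditions into a single stats pass is usually both faster and more complete.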
Hi folks, any idea how we can assign a default value to an empty tag with KV_MODE=xml? The reason: we have an XML segment repeated multiple times under the same parent/grandparent, so the same tag names (xpaths) repeat multiple times, and one field is extracted per tag name, holding a multivalue (mv) result. The issue is that if one of the tags in the middle is empty, it throws off the mv index in these fields. And since we need the full xpath as the field name, it is hard to do a manual generic field extraction ($1::$2) either.

A quick thought: if we could fill in a special value in the raw event, all the mv fields would stay aligned. But is this the only option? Any suggestion or better solution? Or can we do it at search time with something like "| rex field=ccnumber mode=sed ..."?

Thanks a lot
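One common way to fill in such a placeholder is at index time with SEDCMD in props.conf — a sketch, where your_sourcetype and <tag> stand in for the real sourcetype and element name:

```
[your_sourcetype]
SEDCMD-fill_empty = s/<tag><\/tag>/<tag>EMPTY<\/tag>/g
```

SEDCMD rewrites _raw before indexing, so the search-time KV_MODE=xml extraction then sees a value in every position and the mv indexes stay aligned. The trade-off is that this only affects newly indexed data.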
I would like to display the count of a string field on a dashboard. I'm not sure how to do this, with or without writing it in a query. See below.
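A minimal sketch (index and field names are placeholders): count the field's values in a search and render the result as a single-value or column panel on the dashboard:

```
index=your_index your_field=*
| stats count by your_field
```

Saving this search as a dashboard panel and picking a visualization in the panel editor avoids any manual XML work.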
Suppose my logs have fields A=a1..aN, B=b1..bN, C=c1..cN, and I see an increase in the number of requests, i.e.:

index=* | bucket _time span=1h | stats count by _time

Is there a special query I can run that lets me easily figure out which dimension/value is contributing most to the increase? E.g., the outcome would be: A=a423 has increased the traffic the most.
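One sketch (the time windows and field names are placeholders): split the range into a baseline and a recent period, count per dimension value, and sort by the delta:

```
index=* earliest=-2h
| eval period=if(_time >= relative_time(now(), "-1h"), "recent", "baseline")
| stats count(eval(period="recent")) as recent, count(eval(period="baseline")) as baseline by A
| eval delta=recent-baseline
| sort - delta
```

The top row is the value of A that grew the most; repeating the same stats by B, by C, or by a combined key shows which dimension drives the increase overall.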
Hello All, I would like to list the applications where users have never logged in. I have an input.csv file with the list of applications, and I was able to write a search that lists the applications by login count. Now I want to list the applications where users have never logged in (login count = 0). My query to list all the apps with their login counts is as follows:

sourcetype="aduit" success NOT AUTHN_ATTEMPT
| lookup app_lookup.csv Connection_ID as connectionid OUTPUT App_Name
| stats count(App_Name) as "Number of Successful Logins" by App_Name
| sort - "Number of Successful Logins"

How can I get the list of apps with 0 logins? Thanks, Rakesh Venkata
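A sketch building on the existing query (it assumes app_lookup.csv has an App_Name column): append every app from the lookup with a zero count, take the max per app, and keep the zeros:

```
sourcetype="aduit" success NOT AUTHN_ATTEMPT
| lookup app_lookup.csv Connection_ID as connectionid OUTPUT App_Name
| stats count as logins by App_Name
| append [| inputlookup app_lookup.csv | fields App_Name | eval logins=0]
| stats max(logins) as logins by App_Name
| where logins=0
```

Apps with any real logins get a max above 0 and drop out, so only the never-logged-in applications remain.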
How can I add a subnet or CIDR to the ip_intel threat intelligence lookup?
We have updated our licensing policy! For on-premises license stacks of less than 100GB on Splunk Enterprise 8.1.0 and above, Splunk will disable search when license limits are violated after 45 warnings within a 60-day rolling window. For more details, check out the Licensing Enforcement FAQ.
Dear Team, I have around 50 universal forwarders installed in my Splunk environment. How can we tell when one of the universal forwarders has stopped sending data to the indexer? Please help. Regards, Zahab Zia
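A common sketch for this: compare each host's most recent indexed event against the current time and flag hosts that have gone quiet (the 60-minute threshold is an arbitrary placeholder):

```
| metadata type=hosts index=*
| eval minutes_since=round((now()-recentTime)/60)
| where minutes_since > 60
| table host, minutes_since
```

Saved as a scheduled alert, this flags any forwarder whose host has not sent data within the threshold. The Monitoring Console's forwarder monitoring views offer a more complete, built-in alternative.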
We have a set of around 20 non-clustered indexers. There is a duplicate entry of an index definition in indexes.conf, so Splunk is identifying the later entry and applying it. If I delete the entry which is not active, what impact will that have?

Secondly, how is disk space allocated to indexes on an indexer? For example, if the disk space allocated to an index is 80% full and I decrease the allocation by 50%, or frozenTimePeriodInSecs is increased or decreased, is the currently indexed data deleted 100% and new space allocated? Or are the new policies applied to the already-existing data, with the data that should no longer be retained under the new policy (per the frozenTimePeriodInSecs and/or homePath.maxDataSizeMB and/or coldPath.maxDataSizeMB values) rolled off? We are not archiving any data, so after cold it is deleted.
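For reference, the retention knobs in question live per index stanza in indexes.conf — a sketch with illustrative values, not a recommendation:

```
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# Buckets older than this (in seconds) roll to frozen; with no archive
# configured, frozen means deleted.
frozenTimePeriodInSecs = 7776000
# Size caps for the hot/warm and cold storage paths, in MB.
homePath.maxDataSizeMB = 100000
coldPath.maxDataSizeMB = 200000
```

In my understanding, tightening these settings does not wipe the index: existing buckets are evaluated against the new limits, and the oldest buckets roll to frozen until the index is back under policy.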
In Splunk Enterprise on-premises deployments, a warning appears when the contracted license is exceeded, but in Splunk Cloud I don't see any alert when the contracted license is exceeded. Can someone explain this to me?
Hi, the times Splunk shows in "inspect job" are totally unrelated to reality:

This search has completed and has returned 11 results by scanning 18 events in 2.301 seconds

That's for a search which took 23 seconds to show up in the browser, which was also documented in "inspect job" by:

23.24 startup.handoff

The funny thing is that Splunk can also show results after 3 seconds but still report a startup.handoff of 20+ seconds. So it can be off by a factor of 10 in both directions; what's the use of that?

Regards, Arnim