All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, can someone please help me create a Splunk table from the data below?

{ "cd":[ { "cn":"cust-1", "s":"api", "ps":61, "fs":94, "es":142, "ud":[ { "un":"user-0", "ps":61, "fs":94, "es":142, "ad":[ { "at":"asset2", "ps":61, "fs":94, "es":142, "ttd":[ { "tt":null, "ps":61, "fs":94, "es":142 } ] } ] } ] }, { "cn":"cust-2", "s":"api", "ps":727, "fs":152, "es":26, "ud":[ { "un":"user-6", "ps":725, "fs":149, "es":21, "ad":[ { "at":"asset1", "ps":722, "fs":149, "es":20, "ttd":[ { "tt":null, "ps":722, "fs":149, "es":20 } ] }, { "at":"asset2", "ps":3, "fs":0, "es":1, "ttd":[ { "tt":null, "ps":3, "fs":0, "es":1 } ] } ] }, { "un":"user1", "ps":1, "fs":0, "es":0, "ad":[ { "at":"asset1", "ps":1, "fs":0, "es":0, "ttd":[ { "tt":null, "ps":1, "fs":0, "es":0 } ] } ] }, { "un":"user2", "ps":1, "fs":3, "es":5, "ad":[ { "at":"asset2", "ps":1, "fs":3, "es":5, "ttd":[ { "tt":null, "ps":1, "fs":3, "es":5 } ] } ] } ] }, { "cn":"cust-3", "s":"api", "ps":0, "fs":1, "es":0, "ud":[ { "un":"user-3", "ps":0, "fs":1, "es":0, "ad":[ { "at":"asset2", "ps":0, "fs":1, "es":0, "ttd":[ { "tt":null, "ps":0, "fs":1, "es":0 } ] } ] } ] }, { "cn":"cust-4", "s":"api", "ps":1, "fs":4, "es":22, "ud":[ { "un":"user-4", "ps":1, "fs":4, "es":22, "ad":[ { "at":"asset1", "ps":1, "fs":4, "es":22, "ttd":[ { "tt":null, "ps":1, "fs":4, "es":22 } ] } ] } ] } ] }

I want the output to be like:

cn      un      at      tt    ps   fs   es
cust-1  user-0  asset2  null  61   94   142
cust-2  user-6  asset1  null  722  149  20
cust-2  user-6  asset2  null  3    0    1
cust-2  user1   asset1  null  1    0    0
cust-2  user2   asset2  null  1    3    5
cust-3  user-3  asset2  null  0    1    0
cust-4  user-4  asset1  null  1    4    22

Thanks in advance.
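A sketch of one way to flatten this, assuming the JSON arrives as a single event with exactly the field names shown. The idea is to expand one nesting level at a time, making each expanded object the new _raw so spath can extract the next level; the inner-level ps/fs/es values overwrite the parent ones at each step, which matches the desired leaf-level output:

```
... your base search ...
| spath path=cd{} output=customer
| mvexpand customer
| eval _raw=customer
| spath
| spath path=ud{} output=user
| mvexpand user
| eval _raw=user
| spath
| spath path=ad{} output=asset
| mvexpand asset
| eval _raw=asset
| spath
| rename "ttd{}.tt" AS tt
| fillnull value="null" tt
| table cn un at tt ps fs es
```

Since tt is a JSON null, spath may not emit it at all, hence the fillnull to show the literal "null" in the table.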
I need help with my Splunk query below. It is only showing one security metric based on my comparison, but I have about 160 security metrics in total. I need a query that will show all of them based on the comparison below:

index=security source=base_ad_metric_test_v3 earliest=-1y base_ad_metric>0 | stats avg(base_ad_metric) AS avg stdev(base_ad_metric) AS stdev min(base_ad_metric) AS min max(base_ad_metric) AS max latest(base_ad_metric) AS latest_count BY Metric_ID | eval min_thres=5000, max_thres=7500 | eval is_above_thres=if(latest_count>max_thres, 1, 0) | eval is_below_thres=if(latest_count<min_thres, 1, 0) | eval data_item_volatility=case(is_above_thres==1, "High", is_below_thres==1, "Low", true(), "normal") | lookup free_metrics.csv Metric_ID output Data_Item_volatility AS spreadsheet_Data_Item_volatility Operating_System_Metric_Calculation AS spreadsheet_Operating_System_Metric_Calculation Metric_Name AS spreadsheet_Metric_Name

Our data source is Active Directory (through AD ManageWise) fed into Splunk. The CSV file comes from an external Excel spreadsheet with about 150 security metrics, but my result shows only one metric. I need a query that goes through all of the metrics, performs the comparison above, and returns a result for each. The current result, showing just one metric:

Metric ID: 1, avg: 6145.32, stdev: 330.95, min: 4854, max: 7319, latest_count: 5934, data_item_volatility: normal, is_above_thres: 0, is_below_thres: 0, max_thres: 7500, min_thres: 5000, spreadsheet_data_item_volatility: Low, spreadsheet_Metric_Name: Total # of GFE hardware assets, spreadsheet_Operating_System_Metric_Calculation: 5855
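It is hard to be certain without seeing the events, but one common cause is that only one Metric_ID actually has events matching the base search, so only one row ever reaches the lookup. A sketch that drives the search from the lookup instead, so every spreadsheet metric gets a row even when it has no events (join has result-count limits, so verify it behaves on your data volume):

```
| inputlookup free_metrics.csv
| join type=left Metric_ID
    [ search index=security source=base_ad_metric_test_v3 earliest=-1y base_ad_metric>0
      | stats avg(base_ad_metric) AS avg stdev(base_ad_metric) AS stdev
              min(base_ad_metric) AS min max(base_ad_metric) AS max
              latest(base_ad_metric) AS latest_count BY Metric_ID ]
| eval min_thres=5000, max_thres=7500
| eval data_item_volatility=case(latest_count>max_thres, "High",
                                 latest_count<min_thres, "Low",
                                 true(), "normal")
```

Metrics with no events will show empty stats columns, which also tells you which of the 160 are not being reported at all.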
As you can see, top reports most values in megabytes, but a few are in kilobytes. I am working on a timechart that will show memory consumption in gigabytes over a day's time. I need to convert all of the VIRT values containing the letter "K" to megabytes and leave the VIRT values containing the letter "M" untouched, before finally converting all of the VIRT values into gigabytes. This is my query so far:

index=os_evt_prd sourcetype=top host=kdcprd-db00 | rename COMMAND as Program | regex Program="(\w+[[:lower:]])" | replace *oracle WITH oracle | convert rmunit(VIRT) | eval "Used GB Mem."=floor(VIRT*0.0009765625) | timechart avg("Used GB Mem.") BY Program

However, the values aren't calculated properly because of the mix of MB and KB in the VIRT values. How can I get around this?
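A sketch that replaces `convert rmunit` with an explicit unit-aware conversion. The assumption that suffix-less VIRT values are kilobytes (top's usual default unit) is mine, so adjust the final case branch if your top output differs:

```
index=os_evt_prd sourcetype=top host=kdcprd-db00
| rename COMMAND as Program
| regex Program="(\w+[[:lower:]])"
| replace *oracle WITH oracle
| eval virt_mb=case(
    like(VIRT, "%K"), tonumber(replace(VIRT, "K$", "")) / 1024,
    like(VIRT, "%M"), tonumber(replace(VIRT, "M$", "")),
    like(VIRT, "%G"), tonumber(replace(VIRT, "G$", "")) * 1024,
    true(),           tonumber(VIRT) / 1024)
| eval "Used GB Mem."=round(virt_mb / 1024, 2)
| timechart avg("Used GB Mem.") BY Program
```

Normalizing everything to one unit (MB) first, then dividing once to get GB, keeps the mixed-unit problem out of the final conversion.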
I am searching for queries that are running over a certain amount of time and displaying start/end time and query in a table. For some rows, the query is null but when I look at the event, the field has a value. The value is very long though. Is there a limit on the number of characters that can be displayed in a field on a table? I have tried using substring to bring back only a few characters. This will work for the values already being displayed but the value I want to see is still null.   TIA
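One thing worth checking, as a sketch: by default, search-time automatic key/value extraction only looks at roughly the first 10240 characters of an event (the `maxchars` setting in the `[kv]` stanza of limits.conf), so a field whose value begins deep inside a very long event can come back null in the table even though it is visible in the raw event. A hypothetical override on the search head:

```
# limits.conf (sketch) - raise the depth for automatic field extraction
[kv]
maxchars = 40960
```

If the field is extracted by a custom rex/props rule rather than auto KV, this setting will not be the cause, so confirm how the field is produced first.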
Hi, we are using Splunk Stream to get DNS logs into Splunk, and it maps seamlessly to the Network Resolution data model. To get cleaner data, I created another DNS stream, this time with some aggregation, as shown below. The intent is to get concise summaries of DNS events without overwhelming the indexers.

Now, instead of the field "query", the data is coming in as the field "values(query)", which no longer maps to the Network Resolution data model; DNS.query now reads "unknown". Has anyone come across this issue? Would renaming the field resolve it, or is there a better way to fix the mapping? Thanks, ~ Abhi
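If the aggregated stream really does emit the field as `values(query)`, renaming it inside individual searches will not help the data model, since CIM mapping happens at search time against the field names as extracted. A sketch of a search-time alias instead; the sourcetype name here is a placeholder for whatever your aggregated stream uses, and the quoting is needed because of the parentheses in the field name:

```
# props.conf on the search head (sketch)
[stream:dns_aggregated]
FIELDALIAS-dns_query = "values(query)" AS query
```

If your Splunk version rejects the quoted name in FIELDALIAS, an `EVAL-query = 'values(query)'` line in the same stanza is a fallback worth trying.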
Hi all, I recently had our MS Teams Alert Action add-on upgraded from 1.0.11 to 1.0.18, and have been seeing errors with older alerts due to a missing parameter, this one: action.ms_teams_publish_to_channel.param.alert_ms_teams_fields_order

What indicates this missing-parameter failure is an error message in the MS Teams App → Logging Reports → MS Teams - modular action failures menu that contains this: signature="Error: 'in <string>' requires string as left operand, not NoneType. Please double check spelling and also verify that a compatible version of Splunk_SA_CIM is installed." action_name="ms_teams_publish_to_channel"

The fix appears to be going into the MS Teams action through the alert interface and toggling the field-ordering dropdown; this adds the missing parameter.

Is there a way to review all the existing alerts in my Splunk Cloud instance and check for the missing parameter?

Thanks in advance, Alex.
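One approach, sketched: the saved-searches REST endpoint exposes each alert's action parameters, so you can list every alert that uses the Teams action but lacks the parameter. The parameter name is taken from the error above; the other field names are standard REST output but may need adjusting:

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search actions="*ms_teams_publish_to_channel*"
| eval fields_order=coalesce('action.ms_teams_publish_to_channel.param.alert_ms_teams_fields_order', "MISSING")
| table title eai:acl.app fields_order
| search fields_order="MISSING"
```

Running this from the Search & Reporting app on your Splunk Cloud search head should enumerate the alerts still needing the dropdown toggle.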
Hello, I have the following log:

19:02:32.576 [hz.cache-server-2.HealthMonitor] INFO com.hazelcast.int.dia.HM - [X.X.X.X]:5701 [X.X.X.X] [3.11.1] processors=12, physical.memory.total=98.2G, physical.memory.free=12.0G, swap.space.total=0, swap.space.free=0, heap.memory.used=4.3G, heap.memory.free=901.8M, heap.memory.total=5.2G, heap.memory.max=5.2G, heap.memory.used/total=82.96%

I am looking to extract the latest heap.memory.used/total value for the last 24 hours and display it in a radial gauge. Currently I am using the following:

index=* "heap.memory.used/total" | rex "heap.memory.used/total=(?<Number>\d+)" | stats latest(Number) as current_cpu_usage | gauge 0,20,40,60,80,100

I expected the radial gauge to show the current memory usage, i.e. 82.96%, but the needle is stuck at 0. I suspect my extraction of the percentage field is incorrect. Could someone assist me here, and also show how to display the memory usage percentage in the dial?

Best Regards,
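A sketch of a likely fix, assuming the event format shown above. The gauge command expects the field name as its first argument; with `gauge 0,20,...` the literal 0 is taken as the value, which would explain the needle sitting at 0. The rex is also tightened to escape the dots and capture the decimal part:

```
index=* "heap.memory.used/total"
| rex "heap\.memory\.used/total=(?<heap_pct>\d+(?:\.\d+)?)%"
| stats latest(heap_pct) AS heap_used_pct
| gauge heap_used_pct 0 20 40 60 80 100
```

Choose Radial Gauge as the visualization; the numbers after the field name define the color-band boundaries, so the dial should land at the extracted percentage.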
I will be adding CPU cores to my search head (virtual). After I have added the cores, do I need to manually update limits.conf? Currently at 8 cores, going to 12 or 16. limits.conf is currently using defaults:

# The base number of concurrent searches.
base_max_searches = 6
# Max real-time searches = max_rt_search_multiplier x max historical searches.
max_rt_search_multiplier = 1
# The maximum number of concurrent searches per CPU.
max_searches_per_cpu = 1
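For reference, on versions where the documented concurrency formula applies, the limit is derived from the detected core count, so with the defaults above the arithmetic works out as:

```
max_hist_searches = max_searches_per_cpu * number_of_cpus + base_max_searches

  8 cores:  1 * 8  + 6 = 14 concurrent historical searches
 16 cores:  1 * 16 + 6 = 22 concurrent historical searches
```

Assuming Splunk detects the new cores after a restart, the limit should rise on its own without editing limits.conf; you would only edit it to change the multiplier or base.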
Hi Team, I'm unable to log in to my Splunk account. Could you please help me get it fixed?

Thanks
Ravi Sangote
Hello, I am using the following search:

host=XXX sourcetype=ZZZ (http_status=500 OR http_status=502) "HighCostAPI" | stats count by http_status, _time, pzInsKey | fields http_status _time pzInsKey count | addcoltotals count

I get the following results, which is what we wanted originally. Now the customer would like a separate count for the 500 errors and the 502 errors. I guess it wouldn't be necessary to split http_status into 2 columns, as long as I can have a count for both.
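One possible shape, assuming you want the two counts as separate columns per _time/pzInsKey row; `count(eval(...))` does the per-status counting without needing two searches:

```
host=XXX sourcetype=ZZZ (http_status=500 OR http_status=502) "HighCostAPI"
| stats count(eval(http_status=500)) AS count_500
        count(eval(http_status=502)) AS count_502
        BY _time pzInsKey
| addcoltotals count_500 count_502
```

The addcoltotals at the end gives the grand total for each status, matching the total row you already have.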
Hi Team, I have a query like the following:

index=something earliest=-7d latest=now() | stats count by page

I need to calculate how many hits each page is getting in the last 1 hour, 4 hours, 24 hours, and last week. How do I combine these searches to draw one line graph from the results?
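A sketch using a single 7-day search with eval-ed age buckets, so each window becomes a separate column per page rather than four separate searches:

```
index=something earliest=-7d latest=now
| eval age=now() - _time
| stats count(eval(age<=3600))  AS last_1h
        count(eval(age<=14400)) AS last_4h
        count(eval(age<=86400)) AS last_24h
        count                   AS last_7d
        BY page
```

If what you actually want is one line graph over time, `index=something earliest=-7d | timechart span=1h count BY page` may be the simpler route, since a line chart needs a time axis rather than fixed windows.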
Hello guys, we use Splunk Enterprise in our backend, but we also need to collect the events that occur on the client side of a mobile application. Is there a specific product from the Splunk family that does this? PS: sorry about my English, it's very poor. Thanks for everything.
Hello Community, I am looking to plot a line chart comparing 2 time ranges by a specific field. This is for JIRA metrics. For example, I have 3 fields:

Created_time: 2020-10-29T03:49:09.000-0700
resolved_time: 2021-01-06T15:26:23.000-0800
Ticket: ABC123

I want to show a line chart with 2 legends, created_time and resolved_time, by ticket. This will help me compare when a ticket was opened vs. when it was closed.
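One sketch, assuming one event per ticket with the field names shown and an index name that is a placeholder: turn each ticket into two pseudo-events (one at its created time, one at its resolved time), then timechart them by type so the two lines can be compared:

```
index=jira_placeholder
| eval created=strptime(Created_time, "%Y-%m-%dT%H:%M:%S.%3N%z"),
       resolved=strptime(resolved_time, "%Y-%m-%dT%H:%M:%S.%3N%z")
| eval pair=mvappend("created,".created, "resolved,".resolved)
| mvexpand pair
| eval type=mvindex(split(pair, ","), 0),
       _time=tonumber(mvindex(split(pair, ","), 1))
| timechart span=1d count BY type
```

This plots tickets opened per day against tickets resolved per day on one chart; adjust the span to week or month as the volume dictates.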
I have started seeing this message often on my indexer cluster master when I view the Bucket Status page:

bid=_internal~7404~4D6B6D21-6F08-44EA-B793-XXXXXXXXXXXX removed from peer=4D6B6D21-6F08-44EA-B793-XXXXXXXXXXXX, frozen=1

The status for these buckets is "Cannot fix search count as the bucket hasn't rolled yet." I try to roll them but nothing happens. Does frozen=1 mean it is old data, and since we don't have a frozen volume, can I delete these buckets? Any other suggestions on how to address this?
I want to exclude events like "values='{CARD}hfgjllanabbflvh=='}" from the server. Please suggest a regex for this, along with the props and transforms to use. Below is a sample event:

' weblogic.server values='{CARD}hfgjllanabbflvh=='}

[Sourcetype]
SHOULD_LINEMERGE = false
TRANSFORMS-mask = one

[one]
REGEX =
FORMAT =
DEST_KEY =
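A sketch of null-queue routing, which drops the whole event at index time rather than masking part of it. The sourcetype name is a placeholder, and the regex assumes the `{CARD}` marker reliably identifies the events you want to discard:

```
# props.conf
[your_sourcetype]
TRANSFORMS-drop_card = drop_card_events

# transforms.conf
[drop_card_events]
REGEX = values='\{CARD\}[^']+=='
DEST_KEY = queue
FORMAT = nullQueue
```

These stanzas belong on the first full Splunk instance the data passes through (indexer or heavy forwarder), and a restart is needed for them to take effect.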
Hi, one of my users complained that his dashboard data has suddenly been coming back as zero for the last 3-4 days (it was working earlier), whereas his colleague is able to see the data. Both have the same access and permissions (the same security group as well). Any thoughts on how to resolve this?
Happy New Year to all. I am trying to resolve some of the issues I'm coming across with the estreamer/encore add-on 4.09. After installing, I've noticed some issues. The main one is that the app is causing the knowledge bundle ($SPLUNK\var\run\*.bundle) to fill up (19 GB). My current config is set to the default ~2 GB. Eventually the estreamer stops and I stop receiving updates. This didn't happen in previous versions. I did try to blacklist the data folder in the encore add-on app so it wouldn't be bundled, but no luck. Is anyone having similar issues?
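In case it helps, the bundle-replication blacklist normally lives in distsearch.conf on the search head rather than inside the app itself. A sketch with a placeholder path; point it at whichever directory inside the encore app is actually growing:

```
# distsearch.conf on the search head (sketch)
[replicationBlacklist]
encore_data = apps/<encore_app_folder>/data/...
```

After a restart, files matching the pattern should be excluded from the knowledge bundle, which would stop the *.bundle files from hitting the size limit.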
Hi, I have some dashboards that all use similar color visualizations for some single value panels. Currently the values are set in the init of each dashboard:

<init>
<set token="rangeColors">["0xD93F3C","0xFF8000","0x53a051"]</set>
<set token="rangeValues">[90,99]</set>
</init>

In each panel that uses them, the options are set like this:

<option name="rangeColors">$rangeColors$</option>
<option name="rangeValues">$rangeValues$</option>

However, this is used in quite a few dashboards, and the rangeValues need to be updated regularly because of changing targets. So my idea is to create macros and set the token values using a macro, something like this:

<set token="rangeColors">`rangeColors`</set>

or

<eval token="rangeColors">`rangeColors`</eval>

Neither seems to work: "set" just gives back the literal string "`rangeColors`", and "eval" doesn't set a token at all. Can someone help me make this work? Many thanks, Yolan
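Macros only expand inside searches, never inside <set> or <eval>, which would explain the literal string you're seeing. One workaround sketch: a small global search that calls the macro and copies its results into the tokens. Here `range_settings` is a hypothetical macro that expands to something like eval rangeColors="[\"0xD93F3C\",\"0xFF8000\",\"0x53a051\"]", rangeValues="[90,99]":

```
<search>
  <query>| makeresults | `range_settings`</query>
  <done>
    <set token="rangeColors">$result.rangeColors$</set>
    <set token="rangeValues">$result.rangeValues$</set>
  </done>
</search>
```

Placing this at the top level of each dashboard keeps the actual values in one shared macro, so changing the targets means editing the macro once instead of every dashboard.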
I am trying to upgrade to the latest version of this app, but the inputs setup page loads forever. Microsoft Azure Add-on for Splunk version 3.0.1 on a UX heavy forwarder. https://splunkbase.splunk.com/app/3757/ I tried a local test machine and there it was working, so I added the inputs and copied the inputs.conf to my production heavy forwarder's /local folder. Still no data is coming in, and the inputs page still freezes when I try to access it.