All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi all, I am trying to run a basic search that prints a table based on a where clause with a like() condition, but it is not working; it always shows 0 results. The query is as follows:

index="traindetails" sourcetype=*
| eval trainNumber="1114"
| eval train2 = A_BCD_1114_EFG
| where like(train2,"%$trainNumber$%")
| table trainNumber,train2

I also tried the following, but no luck:

| where like(train2,"%"+$trainNumber$+"%")

Can someone please help? Thanks.
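For what it's worth, a possible fix with the assumptions called out: in an ad-hoc search, $trainNumber$ is dashboard-token syntax and is not substituted from an eval field, and SPL concatenates strings with the . operator, not +. Also, eval train2 = A_BCD_1114_EFG without quotes is read as a field name (which does not exist), leaving train2 null, so the where clause drops every row. A sketch:

```
index="traindetails" sourcetype=*
| eval trainNumber="1114"
| eval train2="A_BCD_1114_EFG"
| where like(train2, "%" . trainNumber . "%")
| table trainNumber, train2
```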
I have the Add-on for Unix and Linux installed on my monitored servers, and the data is sent to my indexers. In the Unix:Service sourcetype, the timestamp that is written is delayed by 3.5 hours, while the time written in the event itself is correct. Can someone help me figure out how to fix this?
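A fixed offset like 3.5 hours usually points to a timezone mismatch between where the events are generated and where they are parsed (3:30 is a half-hour timezone offset, e.g. Asia/Tehran or Asia/Kolkata relative to other zones). One hedged sketch, assuming parsing happens on the indexer; the TZ value is a placeholder to adjust to the servers' actual timezone:

```
# props.conf on the indexer (or heavy forwarder) that parses the data
[Unix:Service]
TZ = Asia/Tehran
```

After the change, restart the instance; already-indexed events keep their old timestamps.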
Hi, for the MLTK app on Splunk, I need to change the number of distinct values for logistic regression, based on my data. However, when I try to configure these settings, the corresponding button is greyed out. Why would this be, and how can I resolve it, please? Thanks.
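If the button stays greyed out (often a permissions issue: editing these settings requires write access to the MLTK app), the limit can also be raised directly in mlspl.conf. A hedged sketch, assuming the relevant setting is max_distinct_cat_values and that a default stanza is acceptable; the value 1000 is an example:

```
# $SPLUNK_HOME/etc/apps/Splunk_ML_Toolkit/local/mlspl.conf
[default]
max_distinct_cat_values = 1000
```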
I started mucking around with Splunk at home, since I was going to be responsible for it at work, and I kind of like it, so I set up a single instance at the house to monitor my network traffic. Most things are fine, but for some reason, for a couple of days it went bonkers to the tune of >4 GB! Wow. I get Splunk not wanting people to use it for free when they have really big (even lab) networks, but I have like 5 or 6 VMs and a couple of Pis. The issue is that I can't run any searches to see who is sending the data so that I can stop it. Is there a simple way to reset the number of license violations so that I can troubleshoot what's sending all the data and turn it off?
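Searches against internal indexes generally still work during a license violation, so the per-host volume can be checked even while normal search is blocked. A sketch (b, h, and st are how license_usage.log abbreviates bytes, host, and sourcetype):

```
index=_internal source=*license_usage.log type=Usage
| stats sum(b) AS bytes BY h, st
| eval GB = round(bytes / 1024 / 1024 / 1024, 3)
| sort - GB
```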
Hello Splunkers / @DavidHourani. We have a single-site indexer cluster with 2 indexers that are having storage issues, so we decided to apply the parameters below. We are currently on Splunk version 8.1.7.
1. tsidxWritingLevel = 4
2. enableTsidxReduction = true
3. timePeriodInSecBeforeTsidxReduction = 7890000
The issue here is that from the cluster master I can see RF/SF is not met, and one of the indexers is in automatic detention. In this scenario, what challenges will I face if the above parameters are enabled for all the existing indexes? The Splunk docs don't say much about RF/SF with these parameters.
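For reference, a sketch of how those three settings might be pushed from the cluster master via the configuration bundle; the path and default stanza are assumptions, adjust to the deployment:

```
# master-apps/_cluster/local/indexes.conf on the cluster master
[default]
tsidxWritingLevel = 4
enableTsidxReduction = true
# roughly 3 months, after which tsidx files are reduced
timePeriodInSecBeforeTsidxReduction = 7890000
```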
Hi all, to give some background on the problem: I am trying to run a map command inside a search to get some values. The JSON I am trying to access (sample below) has nested JSON objects, and I only need to read and derive a value from the matched block. As of now, my table command prints 3 rows instead of one (one row for each nested JSON object). I would like to print only the matching JSON block and ignore the others. I think rex and spath will be required here, but with them it was still printing 3 rows as the final output, and I need only 1 row. I am not sure how to use them correctly to get the results. Please help.

My sample search:

Index=Dummy X.id=AA11
| eval version=X.version
| eval connTrain=X.conTrainId    (the value is TR2)
| map Index=ABC Y.TrainID=AA11 Y.version=$version$

Sample JSON is given below. In this case, I need to access only TR2 (the second block) and print its connectionTime and TotalPassengers values. In real data there can be a single JSON block or many, and the matching block can be at any position when there are multiple blocks.

{
  "TrainID": "AA11",
  "TrainData": [
    {
      "ConnectingTrain": {
        "TR1": { "connectionTime": "59", "TotalPassengers": "44" },
        "TR2": { "connectionTime": "33", "TotalPassengers": "47" },
        "TR3": { "connectionTime": "51", "TotalPassengers": "27" }
      }
    }
  ]
}
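One hedged approach that avoids map entirely, assuming the event is valid JSON and the matching key (e.g. TR2) is already available in a field: build the spath path dynamically with the eval spath() function, so only the matching block is extracted. The index and field names below are carried over from the question and may need adjusting:

```
Index=Dummy X.id=AA11
| eval connTrain='X.conTrainId'
| eval base="TrainData{}.ConnectingTrain." . connTrain
| eval connectionTime=spath(_raw, base . ".connectionTime"),
       TotalPassengers=spath(_raw, base . ".TotalPassengers")
| table connTrain, connectionTime, TotalPassengers
```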
Hi, in place of the count I want to show the server name, and change the color based on a condition when the count is > 250. I referred to many links and docs but could not achieve what I wanted.

index=webss sourcetype=webphesst earliest=-1d latest=now
| where HTTP=500
| stats count by host
| eval color=if(count>=250, "#dc4e41", "#65a637"), icon=if(count>=250, "times-circle", "check-circle")
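If the panel is a Simple XML table, one hedged sketch is to drop the color/icon evals and let a format tag color the count column instead; the expression form avoids custom JS, and the index/sourcetype are carried over from the question:

```xml
<table>
  <search>
    <query>index=webss sourcetype=webphesst HTTP=500 | stats count by host</query>
    <earliest>-1d</earliest>
    <latest>now</latest>
  </search>
  <format type="color" field="count">
    <colorPalette type="expression">if (value >= 250, "#DC4E41", "#65A637")</colorPalette>
  </format>
</table>
```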
Hi all, how can I get the user details associated with the 'aadUserId' field in logs from the 'Microsoft Graph Security API Add-On for Splunk'? Does https://splunkbase.splunk.com/app/3757 have to be installed to get the user name / user email ID?
Hello guys, I am getting confused about the query below; can anyone help me understand it? In the search query there are AND conditions on the same field name, and I don't understand how AND works there. If it were OR, the query would check for either value, but how does AND behave when applied to the same field name? Can someone help me out regarding this?

index=* source="WinEventLog:Microsoft-Windows-PowerShell/Operational" AND
  ((EventCode="800" AND EventData="*-ItemProperty*" AND EventData="*\\SYSTEM\\CurrentControlSet\\Control\\Lsa*" AND EventData="*DsrmAdminLogonBehavior*")
  OR (EventCode="4103" AND Payload="*-ItemProperty*" AND Payload="*\\SYSTEM\\CurrentControlSet\\Control\\Lsa*" AND Payload="*DsrmAdminLogonBehavior*")
  OR (EventCode="4104" AND ScriptBlockText="*-ItemProperty*" AND ScriptBlockText="*\\SYSTEM\\CurrentControlSet\\Control\\Lsa*" AND ScriptBlockText="*DsrmAdminLogonBehavior*"))

Thanks in advance.
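For what it's worth, ANDing several wildcard patterns on the same field does not require separate values: it requires a single event whose field value (or one of its values, if the field is multivalued) matches every pattern at once, such as one command line containing all three substrings. A minimal makeresults illustration (the sample value is invented):

```
| makeresults
| eval EventData="Set-ItemProperty HKLM:\\SYSTEM\\CurrentControlSet\\Control\\Lsa -Name DsrmAdminLogonBehavior -Value 2"
| search EventData="*-ItemProperty*" AND EventData="*Lsa*" AND EventData="*DsrmAdminLogonBehavior*"
```

The single event survives the search because its one EventData value satisfies all three patterns.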
I have one table with the current date (when in maintenance) and another with the last date (when it started), and I am looking to compute a new value as the difference of the two dates, without using join.
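A hedged sketch of the usual join-free pattern: search both sources in one query, roll them up with stats on a shared key, and diff the parsed dates. The index names, field names (key, current_date, start_date), and date format here are assumptions to adjust:

```
(index=idx_current) OR (index=idx_last)
| stats latest(current_date) AS current_date, latest(start_date) AS start_date BY key
| eval diff_days = round((strptime(current_date, "%Y-%m-%d") - strptime(start_date, "%Y-%m-%d")) / 86400, 0)
```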
Good day, how can I group results with the same field value into one field value? In the table below I have a field BOX_NAME that repeats the same value across multiple rows; how can I collapse the repeated value so it appears as one value? I am using this search to table the results:

index=indexname sourcetype=sourename
| eval Actualstarttime=strftime(strptime(NEXT_START,"%Y/%m/%d %H:%M:%S"),"%H:%M")
| eval Job_start_by=strftime(strptime(LAST_START,"%Y/%m/%d %H:%M:%S"),"%H:%M")
| table BOX_NAME,JOB_NAME,JOB_GROUP,REGION,TIMEZONE,STATUS,Currenttime,STATUS_TIME,LAST_START,LAST_END,NEXT_START,DAYS_OF_WEEK,EXCLUDE_CALENDAR,RUNTIME,Actualstarttime,Job_start_by,START_SLA,AVG_RUN_TIME

BOX_NAME | JOB_NAME | JOB_GROUP | REGION | TIMEZONE | STATUS
PNB-JAWS-USCA-ORDER-TCA-INBOUND-DAILY | PNC-JAWS-USCA-ORDER-TCA-INBOUND-60ZIP | JAWS | | Central | SUCCESS
PNB-JAWS-USCA-ORDER-TCA-INBOUND-DAILY | PNC-JAWS-USCA-ORDER-TCA-INBOUND-040INF | JAWS | | Central | SUCCESS
PNB-JAWS-USCA-ORDER-TCA-INBOUND-DAILY | PNC-JAWS-USCA-ORDER-TCA-INBOUND-080DEL | JAWS | | Central | SUCCESS
PNB-JAWS-USCA-ORDER-TCA-INBOUND-DAILY | PNC-JAWS-USCA-ORDER-TCA-INBOUND-010ARC | JAWS | | Central | SUCCESS
PNB-JAWS-USCA-ORDER-TCA-INBOUND-DAILY | PNC-JAWS-USCA-ORDER-TCA-INBOUND-025FW | JAWS | | Central | SUCCESS
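One hedged way to collapse each repeated BOX_NAME into a single row is stats values() keyed by BOX_NAME; the field list is trimmed here for brevity, and the remaining columns can be added the same way:

```
index=indexname sourcetype=sourename
| eval Actualstarttime=strftime(strptime(NEXT_START,"%Y/%m/%d %H:%M:%S"),"%H:%M")
| eval Job_start_by=strftime(strptime(LAST_START,"%Y/%m/%d %H:%M:%S"),"%H:%M")
| stats values(JOB_NAME) AS JOB_NAME, values(JOB_GROUP) AS JOB_GROUP, values(TIMEZONE) AS TIMEZONE, values(STATUS) AS STATUS BY BOX_NAME
```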
Hi, is it planned to release an option to use "AWS Roles Anywhere" for the integration? See link: https://docs.aws.amazon.com/rolesanywhere/latest/userguide/introduction.html
We have 100 hosts, and for all of them we want to prepend a keyword to the host name. For example, if the hostnames are TEST1, TEST2 and TEST3 and we want to add the keyword APP, the final host names would be APPTEST1, APPTEST2 and APPTEST3. Can we do this at the UF level? Note: we don't want to do this based on source and sourcetype at the HF level, because of the default sources and sourcetypes.
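A hedged note: the universal forwarder reports whatever host value is set in inputs.conf, so one option is to set it statically on each machine. There is no built-in syntax for prefixing the auto-detected name, so the value would need to be templated by whatever tool deploys the config:

```
# $SPLUNK_HOME/etc/system/local/inputs.conf on each UF
# value generated per machine, e.g. "APP" + the hostname
[default]
host = APPTEST1
```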
I need to create an alert when all the queues below are at 100% for a given indexer. For this I am using the built-in "DMC Alert - Saturated Event-Processing Queues" alert, but I need to tweak it a little so it alerts only when all 4 queues ("aggQueue.*", "indexQueue.0*", "parsingQueue.*" and "typingQueue.0") are at 100% for that host.

Query:

| rest splunk_server_group=dmc_group_indexer /services/server/introspection/queues
| search title=tcpin_queue* OR title=parsingQueue* OR title=aggQueue* OR title=typingQueue* OR title=indexQueue*
| eval fifteen_min_fill_perc = round(value_cntr3_size_bytes_lookback / max_size_bytes * 100,2)
| fields title fifteen_min_fill_perc splunk_server
| where fifteen_min_fill_perc > 99
| rename splunk_server as Instance, title AS "Queue name", fifteen_min_fill_perc AS "Average queue fill percentage (last 15min)"

Output:

Queue name | Average queue fill percentage (last 15min) | Instance
aggQueue.0 | 99.98 | x
aggQueue.1 | 100.00 | x
aggQueue.2 | 99.99 | x
indexQueue.0 | 100.00 | x
indexQueue.1 | 99.98 | x
indexQueue.2 | 99.97 | x
parsingQueue.0 | 100.00 | x
parsingQueue.1 | 99.82 | x
parsingQueue.2 | 99.98 | x
typingQueue.0 | 99.96 | x
typingQueue.1 | 99.99 | x
typingQueue.2 | 99.96 | x
aggQueue.0 | 100.00 | y
aggQueue.1 | 100.00 | y
aggQueue.2 | 100.00 | y
indexQueue.0 | 100.00 | y
indexQueue.1 | 100.00 | y
indexQueue.2 | 100.00 | y
parsingQueue.0 | 100.00 | y
parsingQueue.1 | 100.00 | y
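One hedged tweak: normalize each queue title to its base type, keep only the saturated rows, then require all four types to be present per instance before alerting. Whether "all queues" should mean every pipeline or at least one pipeline of each type is a judgment call; max() below means at least one, swap in min() for the stricter reading:

```
| rest splunk_server_group=dmc_group_indexer /services/server/introspection/queues
| search title=aggQueue* OR title=indexQueue* OR title=parsingQueue* OR title=typingQueue*
| eval fill_perc = round(value_cntr3_size_bytes_lookback / max_size_bytes * 100, 2)
| rex field=title "^(?<queue_type>[a-zA-Z]+Queue)"
| stats max(fill_perc) AS max_fill BY splunk_server, queue_type
| where max_fill > 99
| stats dc(queue_type) AS saturated_types BY splunk_server
| where saturated_types = 4
```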
Good day, I am using a search query to correlate one field and the jobs related to it. I am using the query below with transaction, and while I am trying to get a unique value for one field, values then go missing for the other fields as well. Please correct my query; the output I expect is a table keyed on BOX_NAME, with one unique value per BOX_NAME and the respective JOB_NAMEs under it.

index=indexname sourcetype=sourcetypename
| eval Actualstarttime=strftime(strptime(NEXT_START,"%Y/%m/%d %H:%M:%S"),"%H:%M")
| eval Job_start_by=strftime(strptime(LAST_START,"%Y/%m/%d %H:%M:%S"),"%H:%M")
| transaction BOX_NAME
| table BOX_NAME,JOB_NAME,JOB_GROUP,REGION,TIMEZONE,STATUS,Currenttime,STATUS_TIME,LAST_START,LAST_END,NEXT_START,DAYS_OF_WEEK,EXCLUDE_CALENDAR,RUNTIME,Actualstarttime,Job_start_by,START_SLA,AVG_RUN_TIME
Hi friends, I'm using Splunk Cloud 9.0. I have installed the "Splunk Add-on for Microsoft Cloud Services" on a heavy forwarder, and I want to get data from an Azure Event Hub. I have created an Azure app account and configured the input details, but I'm not getting data into Splunk. I'm getting the error message below:

2022-12-26 10:26:32,874 level=WARNING pid=12124 tid=Thread-2 logger=azure.eventhub._eventprocessor.event_processor pos=event_processor.py:_load_balancing:286 | EventProcessor instance '35e1711c-18e6-480f-a203-ee6ec4070fc2' of eventhub 'eh-spyglass-metrics-aks-whsdapedge-eastus2-stg' consumer group 'splunk'. An error occurred while load-balancing and claiming ownership. The exception is AuthenticationError("Management authentication failed. Status code: 401, Description: 'Attempted to perform an unauthorized operation.'\nManagement authentication failed. Status code: 401, Description: 'Attempted to perform an unauthorized operation.'"). Retrying after 11.293821480572925 seconds

I have already raised a ticket to add my heavy forwarder's server IP to the Azure Event Hub whitelist. Could you please assist? How do I receive data from an Azure Event Hub into Splunk? Thanks in advance.
Hi, I need to generate a list of data given a max and min range, but I can't find a command (function) that does this. I will set max = 50 and min = 10 for the following examples. I think there are two ways to do it, with different arguments:

1. Set max = 50, min = 10, and the number (length) of outputs = 7. Then I would receive output like [10, 16.66, 23.32, 29.98, 36.64, 43.3, 50]. In this case I don't need to set an interval; I only give how many results I want.

2. Set max = 50, min = 10, and interval = 8.1. Then I would receive output like [10, 18.1, 26.2, 34.3, 42.4, 50.5]. The last (max) value can be 50 or 50.5; both work for me. In this case I only give the interval.

Both ways aim to produce a list of data. Personally, I prefer option 1; it is closer to my need. By the way, I hope the output can be a list or multivalue field.
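For what it's worth, SPL's mvrange(start, end, step) eval function produces exactly this kind of multivalue list; it matches option 2 directly, and option 1 falls out by computing the step from the desired count. A sketch (mvrange's end is exclusive, hence max + step; if fractional steps misbehave in your version, scale everything by 10 or 100 first):

```
| makeresults
| eval min=10, max=50, n=7
| eval step=(max - min) / (n - 1)
| eval series=mvrange(min, max + step, step)
```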
We are facing metric gaps several times; we need to collect metrics for a specific time range.
I downloaded and installed Splunk Enterprise at home without procuring a license. Is my understanding below correct?

I will be able to index 500 MB of data daily, and as long as I stay under that limit, I should be able to use Splunk forever.
Hi experts, I'm unable to modify the pie chart colors using Dashboard Studio. I have tried to add field colors under options in Dashboard Studio, but I am unable to edit them for a specific visualization in the source code field. I have a field called "supp_type", and I want the pie chart to be green for the value "current", amber for "previous" and red for "old". When I include charting.fieldColors, it doesn't accept it, or doesn't allow me to save the panel code. Can you help me add these custom colors in Dashboard Studio?

Query:

index=lab host=hmclab
| spath path=hmc_info{} output=LIST
| mvexpand LIST
| spath input=LIST
| where category == "hmc"
| search hmc_version=V* OR hmc_version=unknown
| dedup hmc_name
| eval supp_type=case(match(hmc_version,"^V10.*|^V9R2.*"), "current", match(hmc_version, "^V9R1.*"), "previous", match(hmc_version, "^V8.*|^V7.*"), "old")
| chart count by supp_type useother=false

Source code from Dashboard Studio:

{
  "type": "viz.pie",
  "dataSources": { "primary": "ds_RxEsq1cK" },
  "title": "HMC Versions",
  "options": {
    "chart.showPercent": true,
    "backgroundColor": "transparent",
    "charting.fieldColors": {"current":0x008000, "previous":0xffff00, "old":0xff0000}
  },
  "context": {},
  "showProgressBar": false,
  "showLastUpdated": false
}
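A hedged sketch of one likely fix: charting.fieldColors is a Simple XML option, while Dashboard Studio's pie chart uses seriesColorsByField instead, with hex colors as quoted strings (the 0x... literals are also invalid JSON, which would explain the refusal to save):

```json
{
  "type": "viz.pie",
  "dataSources": { "primary": "ds_RxEsq1cK" },
  "title": "HMC Versions",
  "options": {
    "chart.showPercent": true,
    "backgroundColor": "transparent",
    "seriesColorsByField": {
      "current": "#008000",
      "previous": "#FFFF00",
      "old": "#FF0000"
    }
  },
  "context": {},
  "showProgressBar": false,
  "showLastUpdated": false
}
```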