Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi, I have two KV store lookups, and they are huge:

* the first has more than 250k rows
* the second has about 65k rows

The "250k" lookup contains only an IP field, while the second one contains an IP CIDR plus a LIST field. So I run a search like:

    | inputlookup list_250k
    | rename ip_cidr as ip
    | eval convert_ip=tostring(ip)
    | lookup list_65k ip_cidr AS convert_ip OUTPUT ip_cidr, list
    | where isnotnull(ip_cidr)
    | rename ip_cidr as found_in

I am getting results, but I am curious: are there any limits? If, for example, the search is limited, would I see an error? (There is no progress bar to show that anything is being worked on.)
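
There are relevant caps in limits.conf. As a minimal, hedged sketch (assuming you are allowed to run the rest command and these settings exist on your stack), you can inspect the KV store limits directly:

    | rest /services/configs/conf-limits/kvstore splunk_server=local
    | fields max_rows_per_query max_queries_per_batch max_size_per_result_mb

If a cap applies, results are typically truncated silently rather than producing an error, which would match the "no progress bar, no message" behaviour described above. Note that limits.conf [lookup] settings such as max_memtable_bytes govern file-based lookups rather than KV store collections.
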
I am successfully using some simple ldapsearch + ldapgroup searches to produce membership lists for various AD groups. Ex.

    | ldapsearch search="(&(objectClass=Group)(cn=Remote Desktop Users))"
    | table cn,distinguishedName
    | ldapgroup
    | table cn,member_name,member_type

The searches work, but they always show me all possible membership types (DIRECT, NESTED, etc.). I would like to filter my end results so that only DIRECT members are returned. I've tried inserting a | where clause after the ldapgroup or final table command, but it just returns an empty set. I'm clearly not quite understanding how to pipeline the output correctly. Any help is much appreciated.
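
As a hedged sketch (string comparisons in where are case-sensitive, so first confirm what member_type actually contains), filtering right after ldapgroup should work:

    | ldapsearch search="(&(objectClass=Group)(cn=Remote Desktop Users))"
    | ldapgroup
    | search member_type="DIRECT"
    | table cn,member_name,member_type

If that still returns nothing, run | stats values(member_type) first to see the exact values, and check whether member_name/member_type come back as multivalue fields on a single row per group, since a row-level filter behaves differently in that case.
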
Hi Splunkers, I'm trying to send alerts from one Splunk Cloud instance to a Cortex XSOAR instance. The guide I'm following is this one: Splunk py for NON ES users. Since we don't have Enterprise Security, I must follow the steps described in the section "Splunk non-Enterprise Security Users". Following this guide, what makes me a bit confused is step 4: it says only to create a macro to capture the fields saved in a local file, but gives no indication of how to achieve this. I mean: I know I can put code I want to reuse across multiple alerts in a macro, for example to reuse the same where conditions in several different rules, but what should the syntax be to achieve what the guide states? For example, if after following steps 2 and 3 I got the following field list: list(field) Blade, ReachedPorts, count, dst_ip, earliest_timestamp, latest_timestamp, src_ip, src_user, total_dst_port, what code should I put in the macro to "capture" them? Should I simply put the field list in the macro code?
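
If the macro really only needs to enumerate those fields, a minimal sketch (macro name is hypothetical, and this assumes the guide means a plain field list) would be a macros.conf stanza like:

    [xsoar_alert_fields]
    definition = fields Blade, ReachedPorts, count, dst_ip, earliest_timestamp, latest_timestamp, src_ip, src_user, total_dst_port
    iseval = 0

Each alert would then end with | `xsoar_alert_fields` so that every rule exposes the same field set to the integration.
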
Hello Splunkers, I would like to know if it's possible to prevent the /lib path of a Splunk app from being overwritten after an upgrade of this app. For instance, I edited the cacert.pem file at the following path:

    /opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/certifi/cacert.pem

After the upgrade of the Splunk_TA_microsoft-cloudservices app, the cacert.pem was replaced. Would it be possible to avoid that? Regards, GaetanVP
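
An app upgrade replaces the packaged files, and lib/ is not layered the way .conf files are with default/ and local/, so there is no supported way to protect a file inside it. A hedged workaround sketch (paths are examples) is to keep the customized bundle outside the app and re-apply it as a post-upgrade step:

    # re-apply the custom CA bundle after each TA upgrade (example paths)
    cp /opt/splunk/etc/auth/custom-cacert.pem \
       /opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/certifi/cacert.pem
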
Hello everyone! I have a bit of a challenge I have run into. Long story short, we are required to show our customer that we are completing audits each time period. I saw another site's dashboard recently and am trying to replicate it; the only issue is they didn't build it themselves, so they don't know how it's done. I'll explain it as best as I can. Basically, when they open the dashboard, a table populates with "comment", "recorded by", and "recorded on" fields. They have the option to add a record: a comment box pops up, they type whatever the comment is, and then click save. The table then displays their comment, their user name, and a timestamp. It persists week after week, and has a search bar to find specific records as well. Does anyone have any idea how to create something like this?
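
The usual pattern for this is a KV store (or CSV) lookup that the dashboard appends to. A rough sketch of the "save" search (collection name and token names are hypothetical, and this assumes a Simple XML dashboard with a text input feeding $comment$):

    | makeresults
    | eval comment="$comment$", recorded_by="$env:user$", recorded_on=strftime(now(), "%Y-%m-%d %H:%M:%S")
    | fields comment recorded_by recorded_on
    | outputlookup append=true audit_comments

The table panel is then just | inputlookup audit_comments, optionally filtered by a search token to give you the search bar.
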
I need to create a query where I populate a column's values from a lookup table as the default. Basically, replace the "All" in the query below with all the values of that column from the lookup:

    index="XXX" log_group IN("All")
    | stats sum(event)

Thanks
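
A hedged sketch (the lookup file name and field name are assumptions): a subsearch over the lookup expands into an OR of log_group values at search time, which replaces the hard-coded IN("All"):

    index="XXX" [| inputlookup log_groups.csv | fields log_group]
    | stats sum(event)

The subsearch returns something like ( (log_group="a") OR (log_group="b") ... ), so the main search only matches the groups listed in the lookup.
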
Hi everyone. We are currently facing an issue using a load balancer with a search head cluster. This is an Elastic Load Balancer in AWS, and we can't use the load-balancer-generated cookie; it has to be an application cookie. I have seen this issue before and seen people fix it by using the cookie session_id_PORTNUMBER, but our deployment doesn't need the user to connect to Splunk Web; it's just a service connecting through the ELB to the REST API on port 8090 (had to change from the default 8089). I have also tried splunkd_443 to no avail. The problem is that the ELB is routing the login and getjobs requests to different servers. Is there any other application cookie I could use? This is for Splunk 9.0.4. Thanks everyone!
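
One hedged way to find a usable cookie name is to log in directly against a single cluster member and see exactly which cookies splunkd sets (hostname and credentials are placeholders; -k skips certificate verification for testing only):

    curl -k -c - https://shc-member1:8090/services/auth/login \
         -d username=svc_account -d password='***'

The -c - flag dumps the cookie jar to stdout; whatever splunkd_* name appears there (it usually embeds the management port, e.g. splunkd_8090 rather than splunkd_443) would be the name to configure as the ELB's application cookie.
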
Hello Team, I presently have a single numeric value that is published to AppD every day; below is a sample of this extract:

    name=Custom Metrics|GRA|Pipeline View|WinSec|Total Input, value=7366, aggregator=AVERAGE, time-rollup=AVERAGE, cluster-rollup=INDIVIDUAL

Over the course of a month we will have a curve with 30 data points, as depicted by the screenshot below. Below are the requirements that we are trying to achieve. I do not know where to start with the design and implementation, and would really appreciate some tips.

Variance Warning Alert

The warning alert will be based on a comparison of today's metric versus the metric from the previous day or week, based on the rules below.

Resource: daily cadence report located below.

Rules:
* Sat total is compared to prev Sat total
* Sun total is compared to prev Sun total
* Mon total is compared to prev Mon total
* Tues total is compared to Mon (y'day) total
* Wed total is compared to Tues (y'day) total
* Thur total is compared to Wed (y'day) total
* Fri total is compared to Thurs (y'day) total

Calculation (day-to-day variation):

    allowableWarningMarginValue  = todays metric * 0.5
    allowableCriticalMarginValue = todays metric * 0.8
    ingestVariation = abs(todays total - comparison day total)

    if (ingestVariation >= allowableWarningMarginValue and ingestVariation < allowableCriticalMarginValue) then
        set heatmap value = AMBER
        generate a warning message
    else if (ingestVariation >= allowableCriticalMarginValue) then
        set heatmap value = RED
        generate a critical error message
    else
        set heatmap value = GREEN
    endif

Example (using Friday June 23, WinSec pipeline):

    Todays Total (Fri June 23)       = 7366
    Comparison Total (Thurs June 22) = 300
    allowableWarningMarginValue  = 7366 * 0.5 = 3683
    allowableCriticalMarginValue = 7366 * 0.8 = 5892
    ingestVariation = abs(7366 - 300) = 7066

    if ingestVariation (7066) > allowableCriticalMarginValue (5892)
        set heatmap value = RED
        generate a critical message
    else if ingestVariation (7066) > allowableWarningMarginValue (3683)
        set heatmap value = AMBER
        generate a warning message
    else
        set heatmap value = GREEN
    endif

AppD Result: for a given day I would like to simply output a single heatmap colour. The screenshot below illustrates how this heatmap colour would look over successive days; each daily heatmap colour is simply overwritten each day. Thank you in advance for any help you can provide. Kind regards, Tony.
I have data like below:

    Servername    downtime
    Web1          7 day 2 h
    Web2          2 h 23 min
    Web2          20 min
    Web3          17 min

I have servername and downtime fields. Server names can repeat, each with a different downtime. How can I group this by servername, adding the downtimes into a single row per server? Help us with an SPL query.
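
A hedged sketch (assuming the field is literally named Servername and downtime is a string in the "N day N h N min" shape shown): parse the parts out, convert to seconds, sum, and format back:

    | rex field=downtime "(?:(?<d>\d+)\s*day)?\s*(?:(?<h>\d+)\s*h)?\s*(?:(?<m>\d+)\s*min)?"
    | eval downtime_secs = coalesce(tonumber(d),0)*86400 + coalesce(tonumber(h),0)*3600 + coalesce(tonumber(m),0)*60
    | stats sum(downtime_secs) as total_secs by Servername
    | eval total_downtime = tostring(total_secs, "duration")

tostring(X, "duration") renders the summed seconds back as D+HH:MM:SS, which keeps the table readable.
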
Hi All, We are trying to create a report of how many times a dashboard has been accessed in the last 24 hours in Splunk. Can you please help us create a query to achieve this report? Your kind inputs are highly appreciated! Thank you!
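
A hedged sketch using Splunk's own web-access logs (this assumes you can search _internal and that UI traffic is logged under sourcetype splunk_web_access on your deployment):

    index=_internal sourcetype=splunk_web_access earliest=-24h
        uri_path="*/app/*" status=200
    | rex field=uri_path "/app/(?<app>[^/]+)/(?<dashboard>[^/?]+)"
    | search dashboard!="search" dashboard!="home"
    | stats count by app dashboard user
    | sort - count

The rex pulls the app and dashboard names out of the request path; filter further on a specific dashboard name if you only care about one.
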
Can we create a customized landing page for an existing Splunk application, with a background color/image, and navigate to the dashboards in that app with the help of a dropdown or button? If yes, how can we do it in classic Splunk? Is it possible to do this with the help of Dashboard Studio?
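
Part of this is built in: the app's navigation file can mark any dashboard as the landing page. A minimal sketch (view name hypothetical) for data/ui/nav/default.xml under the app's local directory:

    <nav>
      <view name="my_landing_page" default="true" />
      <view name="search" />
    </nav>

The landing dashboard itself then carries the styling and the links: in classic Simple XML that is typically an html panel plus custom CSS, while Dashboard Studio supports background images and colors natively in its canvas settings.
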
Dear Splunk developers, As the development team behind the app available at https://splunkbase.splunk.com/app/6962, we would like to address an issue we encountered during the publishing process. Unfortunately, we mistakenly published our app under an individual developer account instead of our enterprise account. We would like to ask whether it is possible to rectify this by changing the account associated with our app to our enterprise account, so that it is displayed as being built by QAX.Inc. We are willing to provide any necessary information to facilitate this change. However, if changing the account is not feasible, we kindly request guidance on how to delete the current app so that we can create a new one using a different company account. We greatly appreciate your assistance in resolving this matter. Thank you. Best wishes,
Hi Team, I am unable to access the GUI after completing the installation on the Splunk PROD server. I tried the IP address, FQDN, and hostname, but none of them work. It works with the TEST servers but not with the PROD servers. I get this error when I try to access the GUI.
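
Without the error text it's hard to say, but a hedged first pass on the PROD host (commands assume a Linux install under /opt/splunk) is to confirm the web service is actually up and listening:

    /opt/splunk/bin/splunk status        # is splunkd running?
    netstat -tlnp | grep 8000            # is the web port listening?
    grep -ri 'startwebserver\|httpport' /opt/splunk/etc/system/local/web.conf

If those all look right, compare firewall/security-group rules between TEST and PROD, since that is the most common difference when the same build works in one environment and not the other.
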
I tried to determine the size of my indexes in preparation for a Splunk Cloud migration. I figured I could use the eventcount command to summarise the used index storage:

    | eventcount summarize=true index=* report_size=true
    | eval sizeGB=round((size_bytes/1024/1024/1024),2)
    | eval ddasUnits=(sizeGB/500)

Then I ran this command in one of my existing Splunk Cloud stacks and expected the number to roughly match what the Splunk Cloud Monitoring Console displays for DDAS storage usage. But the numbers differ wildly: eventcount reports something like 323 TB while DDAS usage reports 590 TB. Why is that?
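
For comparison, a hedged alternative measurement (this assumes you are allowed to run dbinspect across your indexes) sums the bucket sizes that dbinspect reports on disk:

    | dbinspect index=*
    | stats sum(sizeOnDiskMB) as sizeMB by index
    | eval sizeGB = round(sizeMB/1024, 2)
    | addcoltotals

Note that eventcount's report_size, dbinspect's sizeOnDiskMB, and the DDAS entitlement counter can measure different things (compressed raw data versus full bucket size including index files and metadata, replicated copies counted or not), so some divergence between them is expected.
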
Hi, I am trying to redirect logs for only one specific index of mine to a 3rd party, but the target destination is receiving all possible logs. My inputs.conf contains several indexes for testing purposes.

inputs.conf:

    [udp://514]
    sourcetype = syslog
    index = custom_index

    [tcp://127.0.0.1:514]
    sourcetype = syslog
    index = main

    [tcp://127.0.0.1:515]
    sourcetype = tcp_logs
    index = tcp_custom

my props.conf:

    # sourcetype stanza, from what I've found in the documentation
    [tcp_logs]
    TRANSFORMS-indexAndForward = forward_to_third_party

transforms.conf:

    [forward_to_third_party]
    REGEX = .*
    DEST_KEY = _TCP_ROUTING
    FORMAT = my_third_party_destination

and outputs.conf:

    [tcpout]
    indexAndForward = true
    defaultGroup = my_third_party_destination

    [tcpout:my_third_party_destination]
    server = 0.0.0.0:9000
    sendCookedData = false

Could you please advise? Thanks
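
One thing to check, as a hedged observation: defaultGroup in [tcpout] routes every event to that group regardless of the transforms, which would explain the behaviour described. A sketch of outputs.conf without it (the server address is a placeholder), so that only events selected by the _TCP_ROUTING transform are forwarded:

    [tcpout]
    indexAndForward = true

    [tcpout:my_third_party_destination]
    server = third-party-host:9000
    sendCookedData = false

With defaultGroup removed, events only reach the group when a transform sets _TCP_ROUTING, which here is scoped to the tcp_logs sourcetype.
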
Hi All, I'd like some help please with a query that's been asked of me, and it's a little out of my depth. The current query below shows the yearly total of helpdesk calls by year:

    index=mmuh_helpdesk sourcetype=mmuh_helpdesk_json
    | dedup id
    | fillnull value=NULL
    | search "problemtype.detailDisplayName"!=*AGRESSO*
    | eval problem_detail='problemtype.detailDisplayName'
    | eval problem_detail=replace(problem_detail, "&#8226","")
    | eval problem_detail=replace(problem_detail, ";","|")
    | eval techGroupLevel = 'techGroupLevel.levelName'
    | eval techGroupLevel = replace(techGroupLevel, "&nbsp;"," ")
    | eval techGroupLevel = replace(techGroupLevel, " ","")
    | eval techGroupLevel = replace(techGroupLevel, "Level"," Level")
    | eval location_Name = 'location.locationName'
    | eval status = 'statustype.statusTypeName'
    | eval priority = 'prioritytype.priorityTypeName'
    | eval techGroupId = 'techGroupLevel.id'
    | eval tech_Name = 'clientTech.displayName'
    | top limit=20 date_year
    | search date_year IN(2020,2021,2022,2023)
    | eval sort=case(date_year=="2020", "02", date_year=="2021","03", date_year=="2022","04", date_year=="2023","05")
    | sort sort
    | fields - sort
    | fields - percent
    | eval Year=case(date_year=="2020", "Y 2020", date_year=="2021", "Y 2021", date_year=="2022", "Y 2022", date_year=="2023", "Y 2023")
    | table Year count

I would now like to show only the years 2021 and 2022 in a line chart, but with each year as a separate line if possible. I'd also like the time span for each year to be broken into months. I can get a single year with the time picker (previous year chosen) and the following query for 2022:

    index=mmuh_helpdesk sourcetype=mmuh_helpdesk_json
    | dedup id
    ... (same evals as above) ...
    | timechart span=1mon count(id)

How can I get both years to show? Thank you
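
A hedged sketch of the overlay (keeping your base search, and assuming _time is the call timestamp): bucket by month-of-year and split by year, so each year becomes its own series and therefore its own line:

    index=mmuh_helpdesk sourcetype=mmuh_helpdesk_json earliest=1/1/2021:00:00:00 latest=1/1/2023:00:00:00
    | dedup id
    | eval Year=strftime(_time, "%Y")
    | eval Month=strftime(_time, "%m")
    | chart count(id) over Month by Year

With the panel rendered as a line chart, Month (01-12) is the x-axis and the 2021 and 2022 series draw as two separate lines.
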
I am running this in Splunk ES (Enterprise Security). My objective is to find those savedsearch_name values whose average run time is greater than 300. I need some assistance in crafting the logic. Here is my SPL so far:

    index=_internal sourcetype=scheduler savedsearch_name=* `WF2020_1797_MA_active_es_shd` ACCELERATE run_time>1
    | rex field=savedsearch_name "_ACCELERATE_DM_(?<savedsearch_name>.*?)_ACCELERATE_"
    | search savedsearch_name!="Splunk_SA_CIM_Splunk_Audit.Modular_Actions" savedsearch_name!="Splunk_SA_CIM_Splunk_Audit.View_Activity"
    | timechart span=5m avg(run_time) AS avg_run_time by savedsearch_name useother=f limit=0
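
If the goal is the list of names rather than the timechart, a hedged sketch (keeping your base search and rex as-is): aggregate with stats and filter on the average:

    index=_internal sourcetype=scheduler savedsearch_name=* `WF2020_1797_MA_active_es_shd` ACCELERATE run_time>1
    | rex field=savedsearch_name "_ACCELERATE_DM_(?<savedsearch_name>.*?)_ACCELERATE_"
    | search savedsearch_name!="Splunk_SA_CIM_Splunk_Audit.Modular_Actions" savedsearch_name!="Splunk_SA_CIM_Splunk_Audit.View_Activity"
    | stats avg(run_time) as avg_run_time by savedsearch_name
    | where avg_run_time > 300

A where clause after stats works because avg_run_time exists as a real field at that point, which is not the case in the split-by timechart output, where each series is a column named after the saved search.
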
I have data like this:

          1st course   2cd course   3rd course   4th course
    A     3            4            2            2
    B     2            5            1            3

I would like to present this data as a line graph. I'm trying to use this code, but when I use multiple "over" fields, an error occurs:

    chart avg(x) as A over "1st course", "2cd course"

How could I make this line graph?
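
chart only accepts a single over field, so the usual trick is to flip the table first. A hedged sketch (assuming the A/B row label lives in a field called name): untable turns the four course columns into course/value pairs, and chart then draws one line per name with the courses along the x-axis:

    | untable name course value
    | chart values(value) over course by name

The course labels (1st ... 4th) happen to sort correctly as strings here; if they didn't, you would add an explicit sort key before charting.
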
Is there any function available in Splunk which converts data in string format to JSON? The data is actually JSON but is missing its quotation marks:

    {uiType:Picker,model:{type:field,name:options_for_teams},plainTextMessage:Please select an option from the below list.,uiMetadata:{label:Please select an option from the below list.,itemType:ID,style:list,options:[{label:Create Service Request,value:Create Service Request,renderStyle:data,enabled:false},{label:Report an Incident,value:Report an Incident,renderStyle:data,enabled:false},{label:Check Incident/Request Status,value:Check Incident/Request Status,renderStyle:data,enabled:false},{label:Self-Service Knowledge Article,value:Self-Service Knowledge Article,renderStyle:data,enabled:false},{label:Submit your pending surveys,value:Submit your pending surveys,renderStyle:data,enabled:false},{label:Chat with Live Agent,value:Chat with Live Agent,renderStyle:data,enabled:false},{label:My Pending Approvals,value:My Pending Approvals,renderStyle:data,enabled:false},{label:Password Reset,value:Password Reset,renderStyle:data,enabled:false},{label:View My Assets,value:View My Assets,renderStyle:data,enabled:false},{label:Create An Emergency Change,value:Create An Emergency Change,renderStyle:data,enabled:false}],multiSelect:false,openByDefault:true,required:true,maskType:NONE,totalOptionsCount:10,totalSearchResultsCount:0,paginationBreak:-1},nluTextEnabled:false,value:View My Assets}

I've added the payload string for reference. Since this payload is in string format, I'm unable to use the spath command.
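
There is no built-in "repair my almost-JSON" function, but as a fragile, hedged sketch (the field name payload is an assumption) you can quote the bare tokens with replace() and then hand the result to spath:

    | eval json = replace(payload, "([{,\[:])\s*([A-Za-z][^,{}\[\]:]*)", "\1\"\2\"")
    | spath input=json

This quotes any token that starts with a letter and runs until the next structural character, leaving numbers like -1 bare. It will also quote true/false (so they parse as strings), and it breaks on values containing colons, commas, or brackets, so treat it as a last resort over fixing the producer to emit real JSON.
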
Hi, I'm looking to create a timechart that has duration on the y-axis and start date on the x-axis. The y-axis is in hours, and the x-axis is the start dates for the last ten days. I am new to Splunk, so I don't really know how to do it.
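
A hedged sketch (this assumes each event carries a numeric duration field in seconds and that _time is the start time): convert to hours and bucket by day over the last ten days:

    ... your base search ... earliest=-10d@d
    | eval duration_hours = duration / 3600
    | timechart span=1d sum(duration_hours) as "Duration (h)"

Swap sum() for avg() or max() depending on whether you want the total, average, or longest duration per day, and render the panel as a line or column chart.
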