Hi Team, I am unable to access the GUI after installation on the Splunk PROD server. I tried the IP address, FQDN, and hostname, but none of them work. It works with the TEST servers but not with the PROD servers. I am getting this error when I try to access the GUI.
I tried to determine the size of my indexes in preparation for a Splunk Cloud migration. I figured I could use the "eventcount" command to summarise the used index storage:

| eventcount summarize=true index=* report_size=true
| eval sizeGB=round((size_bytes/1024/1024/1024),2)
| eval ddasUnits=(sizeGB/500)

Then I ran this command in one of my existing Splunk Cloud stacks and expected the number to roughly match what the Splunk Cloud Monitoring Console displays for DDAS storage usage. But the numbers differ wildly: eventcount reports something like 323 TB, while DDAS usage reports 590 TB. Why do these numbers differ so much?
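One possible cross-check, offered only as a sketch (it assumes dbinspect can see all of your indexers), is to sum per-bucket on-disk size instead of relying on report_size:

| dbinspect index=*
| stats sum(sizeOnDiskMB) AS sizeOnDiskMB by index
| eval sizeGB = round(sizeOnDiskMB / 1024, 2)
| eval ddasUnits = round(sizeGB / 500, 2)
| sort - sizeGB

Note that on-disk bucket size, eventcount's report_size, and the DDAS metering are three different measures, so treat any of these numbers as an approximation rather than an exact match.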
Hi, I am trying to redirect logs for only one specific index of mine to a 3rd party, but the target destination is receiving all possible logs. My inputs.conf contains several indexes for testing purposes.

inputs.conf
[udp://514]
sourcetype = syslog
index = custom_index

[tcp://127.0.0.1:514]
sourcetype = syslog
index = main

[tcp://127.0.0.1:515]
sourcetype = tcp_logs
index = tcp_custom

my props.conf (the sourcetype stanza, from what I've found in the documentation):
[tcp_logs]
TRANSFORMS-indexAndForward = forward_to_third_party

transforms.conf
[forward_to_third_party]
REGEX = .*
DEST_KEY = _TCP_ROUTING
FORMAT = my_third_party_destination

and outputs.conf
[tcpout]
indexAndForward = true
defaultGroup = my_third_party_destination

[tcpout:my_third_party_destination]
server = 0.0.0.0:9000
sendCookedData = false

Could you please advise? Thanks
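For what it's worth, a minimal sketch of how this is usually arranged (the group name primary_indexers and the server value your_indexer:9997 are placeholders, not from the post): defaultGroup = my_third_party_destination sends every event to the third party regardless of the transform, so the default group should point at your normal destination and only the transform should override _TCP_ROUTING for the data you want to forward:

outputs.conf
[tcpout]
# everything goes to the default group unless a transform overrides _TCP_ROUTING
indexAndForward = true
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = your_indexer:9997

[tcpout:my_third_party_destination]
server = 0.0.0.0:9000
sendCookedData = false

props.conf
[tcp_logs]
TRANSFORMS-route_third_party = forward_to_third_party

transforms.conf
[forward_to_third_party]
# every event of sourcetype tcp_logs gets its routing rewritten
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = my_third_party_destination

If the third-party copy should be in addition to normal forwarding, FORMAT can list both groups, e.g. FORMAT = primary_indexers,my_third_party_destination.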
Hi All

I'd like some help please with a query that's been asked of me; it's a little out of my depth.

The current query below shows the total of helpdesk calls by year:

index=mmuh_helpdesk sourcetype=mmuh_helpdesk_json
| dedup id
| fillnull value=NULL
| search "problemtype.detailDisplayName"!=*AGRESSO*
| eval problem_detail='problemtype.detailDisplayName'
| eval problem_detail=replace(problem_detail, "&#8226","")
| eval problem_detail=replace(problem_detail, ";","|")
| eval techGroupLevel = 'techGroupLevel.levelName'
| eval techGroupLevel = replace(techGroupLevel, " "," ")
| eval techGroupLevel = replace(techGroupLevel, " ","")
| eval techGroupLevel = replace(techGroupLevel, "Level"," Level")
| eval location_Name = 'location.locationName'
| eval status = 'statustype.statusTypeName'
| eval priority = 'prioritytype.priorityTypeName'
| eval techGroupId = 'techGroupLevel.id'
| eval tech_Name = 'clientTech.displayName'
| top limit=20 date_year
| search date_year IN(2020,2021,2022,2023)
| eval sort=case(date_year=="2020", "02", date_year=="2021","03", date_year=="2022","04", date_year=="2023","05")
| sort sort
| fields - sort
| fields - percent
| eval Year=case(date_year=="2020", "Y 2020", date_year=="2021", "Y 2021", date_year=="2022", "Y 2022", date_year=="2023", "Y 2023")
| table Year count

I would now like to show only the years 2021 and 2022 in a line chart, but I would like each year to have a separate line. If possible, I'd also like the time span for each year to be broken into months. I can get this for 2022 with the time picker (previous year chosen) and the query below:

index=mmuh_helpdesk sourcetype=mmuh_helpdesk_json
| dedup id
| fillnull value=NULL
| search "problemtype.detailDisplayName"!=*AGRESSO*
| eval problem_detail='problemtype.detailDisplayName'
| eval problem_detail=replace(problem_detail, "&#8226","")
| eval problem_detail=replace(problem_detail, ";","|")
| eval techGroupLevel = 'techGroupLevel.levelName'
| eval techGroupLevel = replace(techGroupLevel, " "," ")
| eval techGroupLevel = replace(techGroupLevel, " ","")
| eval techGroupLevel = replace(techGroupLevel, "Level"," Level")
| eval location_Name = 'location.locationName'
| eval status = 'statustype.statusTypeName'
| eval priority = 'prioritytype.priorityTypeName'
| eval techGroupId = 'techGroupLevel.id'
| eval tech_Name = 'clientTech.displayName'
| timechart span=1mon count(id)

How can I get both years to show? Thank you.
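A sketch of one way to get a separate line per year with months on the x-axis (it assumes the time picker, or the earliest/latest shown here, covers all of 2021 and 2022, and keeps only the parts of the search that affect the count):

index=mmuh_helpdesk sourcetype=mmuh_helpdesk_json earliest="01/01/2021:00:00:00" latest="01/01/2023:00:00:00"
| dedup id
| search "problemtype.detailDisplayName"!=*AGRESSO*
| eval Year="Y ".strftime(_time, "%Y")
| eval Month=strftime(_time, "%m")
| chart count over Month by Year

Rendered as a line chart, this puts Month (01-12) on the x-axis and draws one series per year.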
I am running this in Splunk ES (Enterprise Security). My objective is to find those savedsearch_name values whose average run time is greater than 300. I need some assistance in crafting the logic. Here is my SPL so far:

index=_internal sourcetype=scheduler savedsearch_name=* `WF2020_1797_MA_active_es_shd` ACCELERATE run_time>1
| rex field=savedsearch_name "_ACCELERATE_DM_(?<savedsearch_name>.*?)_ACCELERATE_"
| search savedsearch_name!="Splunk_SA_CIM_Splunk_Audit.Modular_Actions" savedsearch_name!="Splunk_SA_CIM_Splunk_Audit.View_Activity"
| timechart span=5m avg(run_time) AS avg_run_time by savedsearch_name useother=f limit=0
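A minimal sketch of one way to keep only the searches whose overall average run time exceeds 300 seconds (this replaces the timechart with stats; if the 5-minute series is still needed, the same threshold could instead be applied after an eventstats):

index=_internal sourcetype=scheduler savedsearch_name=* `WF2020_1797_MA_active_es_shd` ACCELERATE run_time>1
| rex field=savedsearch_name "_ACCELERATE_DM_(?<savedsearch_name>.*?)_ACCELERATE_"
| search savedsearch_name!="Splunk_SA_CIM_Splunk_Audit.Modular_Actions" savedsearch_name!="Splunk_SA_CIM_Splunk_Audit.View_Activity"
| stats avg(run_time) AS avg_run_time by savedsearch_name
| where avg_run_time > 300
| sort - avg_run_time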
I have data like this:

      1st course   2cd course   3rd course   4th course
A     3            4            2            2
B     2            5            1            3

I would like to show this data as a line graph. I'm trying to use this code, but when I use multiple "over" fields, an error occurs:

chart avg(x) as A over "1st course", "2cd course"

How could I make this line graph?
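A sketch of one approach, assuming the row labels (A, B) live in a field called name: chart only accepts a single over field, so the course columns can first be rotated into rows with untable and then charted:

| table name "1st course" "2cd course" "3rd course" "4th course"
| untable name course score
| chart avg(score) over course by name

This puts the courses on the x-axis and draws one line per value of name (one for A, one for B).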
Is there any function available in Splunk that converts string data to JSON? The data is actually JSON, but it is missing the quotation marks.

{uiType:Picker,model:{type:field,name:options_for_teams},plainTextMessage:Please select an option from the below list.,uiMetadata:{label:Please select an option from the below list.,itemType:ID,style:list,options:[{label:Create Service Request,value:Create Service Request,renderStyle:data,enabled:false},{label:Report an Incident,value:Report an Incident,renderStyle:data,enabled:false},{label:Check Incident/Request Status,value:Check Incident/Request Status,renderStyle:data,enabled:false},{label:Self-Service Knowledge Article,value:Self-Service Knowledge Article,renderStyle:data,enabled:false},{label:Submit your pending surveys,value:Submit your pending surveys,renderStyle:data,enabled:false},{label:Chat with Live Agent,value:Chat with Live Agent,renderStyle:data,enabled:false},{label:My Pending Approvals,value:My Pending Approvals,renderStyle:data,enabled:false},{label:Password Reset,value:Password Reset,renderStyle:data,enabled:false},{label:View My Assets,value:View My Assets,renderStyle:data,enabled:false},{label:Create An Emergency Change,value:Create An Emergency Change,renderStyle:data,enabled:false}],multiSelect:false,openByDefault:true,required:true,maskType:NONE,totalOptionsCount:10,totalSearchResultsCount:0,paginationBreak:-1},nluTextEnabled:false,value:View My Assets}

I have added the payload string for reference. Since this payload is in string format, I'm unable to use the spath command.
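There is no built-in "repair unquoted JSON" function that I'm aware of, but as a rough sketch (assuming the string sits in a field called payload, keys are single words, and values never contain braces, commas, or double quotes), two sed-style replacements can quote the keys and then the scalar values before spath:

| rex mode=sed field=payload "s/([{,])(\w+):/\1\"\2\":/g"
| rex mode=sed field=payload "s/:([^\"{},]+)([,}])/:\"\1\"\2/g"
| spath input=payload

This is fragile: numbers and booleans end up quoted as strings, and any value containing a comma or brace will break the pattern, so treat it as a starting point rather than a general solution.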
Hi, I'm looking to create a time chart that has duration on the y-axis and start date on the x-axis. The y-axis is in hours and the x-axis is the start dates for the last ten days.

I am new to Splunk, so I don't really know how to do it.
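A minimal sketch, assuming the events have a duration field in seconds and that your_index/your_sourcetype are replaced with your own base search:

index=your_index sourcetype=your_sourcetype earliest=-10d@d
| eval duration_hours = round(duration / 3600, 2)
| timechart span=1d avg(duration_hours) AS "Avg duration (hours)"

Rendered as a line or column chart, this gives one point per day for the last ten days. If the duration is already in hours, drop the eval and chart the field directly.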
Hi Team, I am unable to restart Splunk on my search head. It was working fine earlier, but now it isn't. Because of this, the GUI of this search head is also inaccessible and times out. The restart gets stuck at this point and I get no response. I kept it under observation for up to 30 minutes, but there was still no response. web.conf
Hi all, I am working on an application which needs to export records into a CSV using a Splunk job. I checked the SID for that job and ran the loadjob command; it shows 70,000 records, but when I export I only get 50,000 records. I checked the savedsearches.conf file for that macro under /opt/splunk/etc/system/default and found that dispatch.max_count = 500000, so please suggest what to check. Thank you.

Frontend: React JS
Backend: Splunk
Splunk version: 8.2.0
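One possible cause, offered as a guess only: exports that go through the REST API are capped at 50,000 rows by default by maxresultrows in limits.conf, independently of dispatch.max_count. A sketch of the override (set on the search head in a local config, never in system/default; the value below is an example):

limits.conf
[restapi]
# default is 50000; raise only as far as you actually need
maxresultrows = 100000

Alternatively, the streaming search/jobs/export endpoint is generally the safer way to pull very large result sets, since it is not subject to this row cap.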
Dear Team, Can you tell us about some possible custom use cases for Imperva WAF?   Regards, Anshul Sharma  
Hello. If you look at the Search Manual, one of the restrictions listed is the inputlookup command:

[Search Manual] Restrictions for standard mode federated search
Standard mode federated search does not support the following: generating commands other than search, from, loadjob, mstats, or tstats. For example, federated searches cannot include the datamodel or inputlookup commands. You can find a list of generating commands in Command types, in the Search Reference.

Is this inputlookup restriction a constraint on the local Splunk instance, or is it a restriction on the remote one? Thank you.
Hello Splunkers, my _internaldb and _introspection indexes are getting bigger, and I am wondering if I can delete some events in order to reduce their sizes. How would I do that? Is it safe to do it?

Thanks for your time!
GaetanVP
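For reference, a minimal sketch of the usual approach: rather than deleting events (the delete command only hides events from searches and does not free disk space), retention for these internal indexes is typically reduced in indexes.conf so that old buckets roll to frozen and are removed. The values below are examples only; adjust them to your own requirements:

indexes.conf
[_internal]
# keep 14 days of internal logs
frozenTimePeriodInSecs = 1209600

[_introspection]
# keep 7 days and cap the total index size
frozenTimePeriodInSecs = 604800
maxTotalDataSizeMB = 5120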
News!? It seems we need an updated version of this app that is compatible with Splunk Cloud (vetted), as it has lost its Splunk Cloud compatibility (it is no longer vetted). Is anyone aware of any updates here? Thank you.
Hello all, I am seeing the behaviour below in my Splunk deployment. Checking the Splunk internal logs with

index="_internal" component=HotBucketRoller caller=clusterslavecontrol_rollhotbucket idx=_internal

shows that hot buckets are being rolled to warm by the HotBucketRoller component via clusterslavecontrol_rollhotbucket:

07-23-2023 12:39:23.522 -0400 INFO HotBucketRoller [35263 indexerPipe] - finished moving hot to warm bid=_internal~18979~0754D341-849D-4A16-9B2B-322415DA7E29 idx=_internal from=hot_v1_18979 to=db_1690130735_1690130086_18979_0754D341-849D-4A16-9B2B-322415DA7E29 size=3182592 caller=clusterslavecontrol_rollhotbucket isinit=false selective=false

The buckets rolled by this method are very small.
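If it helps to quantify how small and how frequent these buckets are, a sketch run against the affected index (field names are as returned by dbinspect):

| dbinspect index=_internal
| search state=warm
| stats count AS buckets, avg(sizeOnDiskMB) AS avg_size_mb, min(sizeOnDiskMB) AS min_size_mb by splunk_server
| sort - buckets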
I have a HEC token sending various logs from AWS CloudWatch. The HEC token is set to allow two indexes, paloalto and aws, and all the data is going to the default index, aws. I want all the data from source=syslogng to be sent to index=paloalto instead of index=aws. Should I change the inputs.conf or props.conf for the splunk_httpinput app in the deployment apps?
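A minimal sketch of the usual source-based index override, with the caveat that index-time transforms only apply if the events actually pass through the parsing pipeline (data sent to the HEC /event endpoint with metadata already set may behave differently):

props.conf
[source::syslogng]
TRANSFORMS-route_syslogng = route_syslogng_to_paloalto

transforms.conf
[route_syslogng_to_paloalto]
# rewrite the destination index for anything matching the props stanza above
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = paloalto

These would live on the instance that first parses the data (the HEC receiver or heavy forwarder), and paloalto must remain in the token's list of allowed indexes.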
Hi team, could you please guide me on how to push/import the dashboard using the client's secret and temporary access token? Thank you.
Hi folks, I have a very simple alert set up that triggers if the number of results is greater than 0. I'd like to throttle the alert from triggering again for a specified time period, but the throttle seems to be ignored.

Search: index=sample host=example_host
Schedule: Cron - */5 * * * *
Trigger: Number of Results > 0, Trigger Once
Throttle: Suppress triggering for 10 minutes
Action: Send email

The alert triggers with no problem; however, rather than throttling for 10 minutes, the alert gets triggered again after 5 minutes if the condition is met. It's a simple search where the trigger condition is there being any results at all. What am I doing wrong here? Any help would be greatly appreciated!
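One thing worth checking, as a sketch only: whether the saved search on disk actually carries the suppression settings the UI shows. For this alert, the relevant savedsearches.conf keys (the stanza name is a placeholder; the values mirror what the post describes) would look roughly like:

savedsearches.conf
[My sample alert]
cron_schedule = */5 * * * *
counttype = number of events
relation = greater than
quantity = 0
# "Trigger Once" corresponds to digest mode
alert.digest_mode = 1
# throttling: suppress re-triggering for 10 minutes
alert.suppress = 1
alert.suppress.period = 10m

If alert.suppress is 0 or alert.suppress.period is missing, the throttle setting never made it into the config (for example, it was saved in a different app or user context than the one running the alert).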
1) I want to list the top 10 usernames that got the most 403 status codes. For example, a username named sigma got 2000 of this code; I want this username to be at the top of the list.

2) I want to list the top 10 usernames that got the most 403 status codes on certain objects. For example, the username sigma got 2000 403 status codes on the secret object.

Fields: username, status_code, object_ref
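A minimal sketch for both, assuming the events live in an index called your_index (swap in your own base search):

1) Top 10 usernames by 403 count:
index=your_index status_code=403
| top limit=10 username

2) Top 10 username/object combinations by 403 count:
index=your_index status_code=403
| stats count by username, object_ref
| sort - count
| head 10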
Hello, I have a distributed Splunk environment and ThreatQ TIP installed in the network. I'm trying to integrate the ThreatQ TIP (threat intelligence platform) with my distributed Splunk environment, but I can't find any good resources or walkthroughs.

Note: I have already installed the ThreatQ app and add-on on the Splunk search head.