All Topics

Hi all, any idea what types of logs we can onboard for WSL2, and how we can do that?
Hello Splunkers!

We have a situation here and need your help and experience. We are looking for the best practice for working with large CSV files (at least 1 million rows) to produce fast searches and fast dashboards.

The case is also special in that these CSV files are updated daily in the following manner: it's a daily generated report from another system, and this is the only way to send the data to Splunk. It can contain modifications (new rows with new data, modified values of old data, or full removal of some rows), so we need to update Splunk daily on the changes to the files.

The only way I can see is to remove the indexed data and re-index the CSV files every day! I don't actually know how to do that, whether we need to automate the whole process, or whether there is a better practice than this approach.

Appreciate your help.
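One pattern worth considering for full daily dumps, instead of re-indexing: load the file as a lookup (or KV store collection) and replace it wholesale each day, so additions, modifications, and removals are all handled by the replacement. A minimal sketch, assuming the file is uploaded as a lookup named daily_report.csv with hypothetical status and category columns:

```
| inputlookup daily_report.csv
| search status="active"
| stats count by category
```

The daily replacement can be automated by overwriting the file in the app's lookups directory (or via the REST API); searches always see the current day's state without any index cleanup.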
I have a number of events searchable by:

index=main sourcetype="myevents"

All of them show a foo field with value bar. When adding that as a filter to my query:

index=main sourcetype="myevents" foo=bar

no results are returned. If I update the above to

index=main sourcetype="myevents" foo="bar*"

I do get results. Any pointers on how to see the real value of foo?

PS: I couldn't print foo as a list of characters. The closest I got was

| rex field=foo mode=sed "s/(.)/\1+/g"

which printed b+a+r+ (note the last visible character seems to be 'r').
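Since the wildcarded search matches but the exact one does not, a trailing whitespace or non-printing character in foo is a likely cause. A quick sketch to check: if the length comes back greater than 3 for what displays as "bar", something invisible is appended.

```
index=main sourcetype="myevents"
| eval foo_len=len(foo), foo_trimmed=trim(foo)
| table foo foo_len foo_trimmed
```

If trim() fixes it, the stray character is ordinary whitespace; otherwise it may be a non-printing character introduced at extraction time.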
Hi, I'm a beginner at Splunk and am running into a problem with lookups. I have indexed IIS data in one sourcetype called 'iis' and uploaded a lookup CSV called 'cve-ip', which is already defined. I'm trying to correlate and find matches for the 'src_ip' column in the 'cve-ip' file with the 'c-ip' field in the 'iis' sourcetype. Here is the SPL lookup I am running:

sourcetype=iis | lookup cve-ip src_ip AS c_ip

This SPL is returning all events in the 'iis' sourcetype, not only those which match the values in the 'src_ip' column of the 'cve-ip' file. I would like some help figuring out why this is happening and returning all events as opposed to just matching events. Happy to provide any details as needed.
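A likely explanation: the lookup command only enriches events, it never filters them, so every event passes through regardless of whether it matched. One way to keep only matches is to output a field from the lookup and filter on it afterwards; a sketch, where matched_ip is just an illustrative name:

```
sourcetype=iis
| lookup cve-ip src_ip AS c_ip OUTPUT src_ip AS matched_ip
| where isnotnull(matched_ip)
```

Events whose c_ip has no row in cve-ip get no matched_ip value, so the where clause drops them.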
I need to get a top 10 of the users who use Splunk the most.
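One possible approach, assuming access to the internal _audit index where search activity is recorded:

```
index=_audit action=search info=granted
| stats count AS searches by user
| sort - searches
| head 10
```

"Use the most" could also be measured by search runtime or data scanned; counting granted searches per user is just one reading of it.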
Hi,

I have applications that log login events as multiple events. Example:

[07B0:007E-19E8] 2021.03.17 11:59:01 Opened session for User Name/HEXP/HU (Release 8.0.2FP6)
[07B0:007E-19E8] 2021.03.17 11:59:01 ATTEMPT TO ACCESS SERVER by User Name/HEXP/HU was denied
[07B0:007E-1408] 2021.03.17 11:59:01 Closed session for User Name/HEXP/HU Databases accessed: 0 Documents read: 0 Documents written: 0

This is an unsuccessful login event. When the login is successful, only the first event is logged. I can connect these events with transaction, which is OK for some reporting purposes, but if I use transaction then I can't tag these events and I can't make the logs CIM compliant. Is there a way to handle these kinds of situations, or is it not possible to tag these kinds of events correctly?

Thanks,
László
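One alternative to transaction is correlating with stats, which keeps the result a normal search output that eval can classify (and that a tagged eventtype or data model search could then build on). A rough sketch under assumed index/sourcetype names, correlating on the bracketed thread ID plus user:

```
index=app_logs
| rex "^\[(?<thread>[^\]]+)\] \S+ \S+ (?<message>.+)"
| rex field=message "(for|by) (?<user>.+?/\S+/\S+)"
| stats earliest(_time) AS _time values(message) AS messages BY thread, user
| eval action=if(match(mvjoin(messages, "|"), "was denied"), "failure", "success")
```

Whether the thread ID is a reliable correlation key depends on the application (the Closed event above has a different ID), so the grouping fields may need adjusting.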
Where do I find the already built-in dashboards in Splunk Enterprise & ES?
Hello! I am following this documentation and I am keen on re-ingesting failed AWS Firehose requests via the AWS SNS/SQS service using the Splunk AWS Add-on: https://www.splunk.com/en_us/blog/tips-and-tricks/aws-firehose-to-splunk-two-easy-ways-to-recover-those-failed-events.html

Problem: when I receive a failure message from Firehose, my Lambda code strips the Kinesis metadata to restore the original format. Now, if I send this to Splunk (the way the above document guides, i.e. SNS/SQS and then the Splunk AWS Add-on), it does not do the correct parsing at the sourcetype level. I would like an example of what the request sent through AWS SNS/SQS and the Splunk AWS Add-on is supposed to look like, to get past the parsing issue at the sourcetype level.
Hi All, I have installed and set up the JMS Modular Input add-on on a Splunk HF and given the below inputs:

- Activation key
- JNDI class name
- Message queue/topic name
- Username
- Password

I have also set the JAVA_HOME path.

I am getting the below error.

All the required JAR files come with the add-on itself; that's what I gathered from the documentation. Has anyone faced this issue while connecting? Please help.
Hey Splunkers, anyone using Splunk with Manhattan Active Warehouse Management?
Hi, I have been stuck with this for the last few days and I really need some help. I'm trying to create a gauge displaying the uptime of an object. I have this query for checking the current status (last 5 min) of the object, whether it is running or not (10 for running, 0 for not):

| eval Indicator=if(state=="RUNNING", "10", "0")
| timechart span=5min min(Indicator) as "Trend"
| eventstats latest(_time) as current
| where current=_time
| eval SI=if(Trend==0,"Currently Down","UP")

If the value of SI is "Currently Down", then just display that. If it is "UP", I need to do some calculations for the uptime. I have that query as below:

| eval Indicator=if(state=="RUNNING", "10", "0")
| timechart span=5min min(Indicator) as "Trend"
| eval DownTime=if(Trend==0,_time,null()), current_time=now()
| where isnotnull(DownTime)
| eventstats latest(_time) as current
| where current=_time
| eval diff=(current_time-DownTime),
       Days=diff/86400,
       Days=if(match('Days',"^[\d\.]*$"),floor('Days'),'Days'),
       mod1=(diff%86400),
       Hours=mod1/3600,
       Hours=if(match('Hours',"^[\d\.]*$"),floor('Hours'),'Hours'),
       mod2=(diff%3600),
       Minutes=mod2/60,
       Minutes=if(match('Minutes',"^[\d\.]*$"),floor('Minutes'),'Minutes'),
       Seconds=(diff%60)
| eval UpTime = Days." Days, ".Hours." Hours, ".Minutes." Minutes, ".Seconds." Seconds"
| table UpTime

Can someone please help me merge these two queries into one, so that if the state is currently not running it shows "Currently Down", and otherwise it shows the uptime?
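A sketch of how the two searches above might be merged into one, keeping the original field names (the case where no downtime bucket exists inside the search window is not handled; last_down would be null there and could be defaulted with coalesce() to the window start):

```
| eval Indicator=if(state=="RUNNING", 10, 0)
| timechart span=5min min(Indicator) as Trend
| eval DownTime=if(Trend==0, _time, null())
| eventstats max(_time) as current, max(DownTime) as last_down
| where _time=current
| eval diff=now()-last_down,
       Days=floor(diff/86400),
       Hours=floor((diff%86400)/3600),
       Minutes=floor((diff%3600)/60),
       Seconds=diff%60
| eval Result=if(Trend==0, "Currently Down",
       Days." Days, ".Hours." Hours, ".Minutes." Minutes, ".Seconds." Seconds")
| table Result
```

The idea is to compute both branches up front, keep only the latest bucket, and let a single if() pick between the "Currently Down" string and the formatted uptime.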
Which Splunk Enterprise & ES vital signs should be checked daily by an admin to keep Splunk & ES smiling 24x7? What do you do to take Splunk Enterprise & ES's pulse and keep it humming? Thank you for your expert advice.
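Alongside the Monitoring Console dashboards, one simple daily pulse-check is a scan of internal errors and warnings; a starting-point sketch:

```
index=_internal source=*splunkd.log* log_level=ERROR OR log_level=WARN
| stats count by component
| sort - count
```

A sudden jump in any component's count is usually worth a closer look before it becomes a user-visible problem.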
Hi all, I have been trying to extract an error code which is alphanumeric and delimited as per the below, but I am not able to extract it with rex due to the unstructured fields. Is there any way to extract these fields so I can do a timechart on the error codes? Any help please.

Sample piece of log:

error=30578910//=404.EBS.SYSTEM.101:6NAHKFZA//=404.IMS.SERVERIN.103:2GSO0LPT//=404.IES.SERVER.105:5X3HSH18M//=404.IES.SERVEROUT.105,missingFulfillmentItems

Required output:

404.EBS.SYSTEM.101
404.IMS.SERVERIN.103
404.IES.SERVER.105
404.IES.SERVEROUT.105
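Since the codes themselves have a regular shape (digits, dot, uppercase words, dot, digits), one sketch is to match that shape directly with a multi-match rex, assuming all codes follow the pattern in the sample:

```
... | rex field=_raw max_match=0 "(?<error_code>\d{3}\.[A-Z]+\.[A-Z]+\.\d+)"
| mvexpand error_code
| timechart count by error_code
```

max_match=0 captures every occurrence into a multivalue field, and mvexpand turns each code into its own row before the timechart.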
I have a glass table which I want to add as a drilldown on a button, so I have specified the URL of the glass table on the button click. However, I want the glass table to be opened in full-screen mode by default. How is it possible to do this?
I have several Windows servers that report host=$decideOnStartup, while other Windows events correctly provide the Windows host name. Any ideas why, and how to correct this?
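A literal $decideOnStartup usually means the token was written into a local inputs.conf on the forwarder and is no longer being re-evaluated at startup. One common fix is to hard-code the host there and restart the forwarder; a sketch, where the hostname is a placeholder:

```
# $SPLUNK_HOME/etc/system/local/inputs.conf on the affected forwarder
[default]
host = MYWINHOST01
```

Already-indexed events keep the wrong host value; only new events pick up the corrected setting.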
Hi all. I need some help to index all data coming into one server and only forward 3 sourcetypes to a 2nd server. Receiving and indexing the data is not a problem, but I cannot seem to get the 3 sourcetypes to the 2nd server. Any help would be appreciated.

My props.conf:

[cisco:asa]
TRANSFORMS-routing = gsoc

[icsp]
TRANSFORMS-routing = gsoc

[syslog]
TRANSFORMS-routing = gsoc

transforms.conf:

[gsoc]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = gsocPrimary

and outputs.conf:

[tcpout]
defaultGroup = nothing
indexAndForward = true

[tcpout:gsocPrimary]
server = *.*.*.*:9997
Hello, I have two lookup tables. From the first I want to take the field "created_by". In the second I want to compare against the "created_by" values from the first lookup and check whether the field "description" is empty or not. If it is empty, I want to raise an alert. How can I join these two lookups? Thanks.
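A sketch of one way to do it with inputlookup and a join (the lookup file names here are placeholders):

```
| inputlookup first_lookup.csv
| fields created_by
| join type=inner created_by
    [ | inputlookup second_lookup.csv
      | fields created_by description ]
| where isnull(description) OR description=""
```

Saved as an alert that triggers when results exist, this fires whenever a created_by from the first lookup has an empty description in the second.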
Hi All, I kindly need your advice. I need to set up an indexer cluster in my customer's environment, but unfortunately they don't have any other server to act as the manager node. My question is: can I enable the manager node role on a search head server? Is there any impact on search head functionality? Thanks.
I have structured data files for which I did field extraction using the Splunk field delimiter in a development box. When I packaged the app and placed it in production, it is not working. I checked the permissions and they are global.

The data looks like this (file name windows_patch.log):

Step_Execution_Time~^~Applications~^~Server~^~Step_Name~^~Step_Status~^~Step_Logs~^~Step_Comment
13-01-2021 12:09:39 PM~^~SAP,SQL,Oracle~^~test2k19.testmbs.com~^~Connect to WSUS~^~Success~^~WinRM service is already running on this machine.\r\nWinRM is already set up for remote management on this computer.\r\n~^~ Connected to WSUS cidsuswuraeuw02.testmbs.com successfully.
13-01-2021 12:09:41 PM~^~SAP,SQL,Oracle~^~test2k19.testmbs.com~^~Loading PowerShell Modules on Target Host~^~Success~^~\nPowershell Output:\n~^~Fetch patches details successfully to apply on Target Hosttest2k19.testmbs.com.

props.conf:

[Windows_Pre_Patching]
REPORT-Patch-Windows_Pre_Patching = REPORT-Patch-Windows_Pre_Patching

transforms.conf:

[REPORT-Patch-Windows_Pre_Patching]
DELIMS = "~^~"
FIELDS = "Step_Execution_Time","field2","field3","Applications","field5","field6","Server","field8","field9","Step_Name","field11","field12","Step_Status","field14","field15","Step_Logs","field17","field18","Step_Comment","Step_Comment"

Please guide.
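One thing to note: DELIMS treats each character in the string as its own delimiter, so "~^~" splits on ~ and ^ separately, which is why the field list needs filler names like field2/field3. A regex-based transform handles the multi-character ~^~ delimiter directly; a sketch, where the stanza name is just illustrative and must match the REPORT- reference in props.conf:

```
# transforms.conf
[report_windows_pre_patching]
REGEX = ^(?<Step_Execution_Time>.+?)~\^~(?<Applications>.*?)~\^~(?<Server>.*?)~\^~(?<Step_Name>.*?)~\^~(?<Step_Status>.*?)~\^~(?<Step_Logs>.*?)~\^~(?<Step_Comment>.*)$

# props.conf
[Windows_Pre_Patching]
REPORT-patch = report_windows_pre_patching
```

This sidesteps the filler-field bookkeeping entirely, since each named group maps to exactly one ~^~-delimited column.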
I have a query like this where I group by REQUEST_ID:

eventtype=sfdc-event-log EVENT_TYPE="ApexTrigger" REQUEST_ID!=""
| stats sum(EXEC_TIME) as e1, min(TIMESTAMP_DERIVED) as e2 by REQUEST_ID
| eval e1=e1/1000
| sort -e1

I would like to add a new field to this output called TRIGGER_TYPE and display, from each group, only the trigger type which has the minimum TIMESTAMP_DERIVED value (e2). (Note that TIMESTAMP_DERIVED is my custom timestamp field.)

I see I can get a list of all the trigger types in each group with list(TRIGGER_TYPE), but I only want the TRIGGER_TYPE which has a specific value for the TIMESTAMP_DERIVED field. Any ideas on how this can be achieved?
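One sketch: since earliest() keys off _time rather than a custom timestamp field, sort by the custom field first and let stats first() pick the value seen at the minimum TIMESTAMP_DERIVED per group:

```
eventtype=sfdc-event-log EVENT_TYPE="ApexTrigger" REQUEST_ID!=""
| sort 0 REQUEST_ID TIMESTAMP_DERIVED
| stats sum(EXEC_TIME) AS e1, min(TIMESTAMP_DERIVED) AS e2,
        first(TRIGGER_TYPE) AS TRIGGER_TYPE BY REQUEST_ID
| eval e1=e1/1000
| sort - e1
```

first() returns the first value in input order, so after the ascending sort it corresponds to the row with the minimum TIMESTAMP_DERIVED; sort 0 avoids the default result-count truncation.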