All Topics


Hello, we have to import a CSV file that always contains the same number of columns (and corresponding values), but the system that generates it sometimes changes the order of the header columns, like this:

File01.csv: field01,field02,field03
File02.csv: field03,field01,field02

Is there any way to ingest the file without using this setting in props.conf?

INDEXED_EXTRACTIONS = csv

The reason is that with INDEXED_EXTRACTIONS, Splunk adds those fields to the .tsidx files, and we would like to avoid that. Thanks a lot, Edoardo
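One pragmatic workaround, sketched below under the assumption that the header row is always present and the target order is field01,field02,field03 (script name and filenames are hypothetical): normalize the column order outside Splunk before the forwarder reads the file, so indexed extractions are never needed.

# normalize.sh: rewrite a CSV so its columns always appear as field01,field02,field03.
# The header row (NR==1) maps each column name to its position; every row,
# including the header itself, is then printed in the fixed order.
awk -F',' '
NR == 1 { for (i = 1; i <= NF; i++) pos[$i] = i }
{ printf "%s,%s,%s\n", $pos["field01"], $pos["field02"], $pos["field03"] }
' File02.csv > File02.normalized.csv

The cost is an extra preprocessing step in the pipeline, but the files Splunk monitors then always have a fixed layout.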
Hi Splunkers, I would like to calculate the duration of an event as a percentage of the day. I have data in a database that is being extracted; one of the fields is duration: DURATION="01:00:00". As this is already in human-readable format, I thought I would convert it to epoch so I could sum it, and I got the returned value 06:00:00. So far so good, or so I thought, but looking at the percentages, things were not quite right. So I included the epoch value in the results and it showed me this: 20412540000 (Wed Nov 06 2616 06:00:00 GMT+0000).

| eval DURATION=strptime(DURATION,"%H:%M:%S")
| stats sum(DURATION) as event_duration by NAME
| eventstats sum(event_duration) as total_time
| eval percentage_time=(event_duration/total_time)*100
| eval event_duration1=strftime(event_duration,"%H:%M:%S")
| eval total_time1=strftime(total_time,"%H:%M:%S")
| eval av_time_hrs=(event_duration1/total_time1)

Based on the data, is it possible to get a percentage?
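The likely culprit is that strptime("01:00:00","%H:%M:%S") returns an epoch timestamp (a date at that clock time), not a duration, so summing several of them drifts centuries into the future. A minimal sketch that instead converts HH:MM:SS to plain seconds before summing (field names taken from the post, and assuming DURATION is always exactly HH:MM:SS):

| eval parts = split(DURATION, ":")
| eval duration_secs = tonumber(mvindex(parts,0))*3600 + tonumber(mvindex(parts,1))*60 + tonumber(mvindex(parts,2))
| stats sum(duration_secs) as event_duration by NAME
| eventstats sum(event_duration) as total_time
| eval percentage_time = round((event_duration/total_time)*100, 2)
| eval event_duration_readable = tostring(event_duration, "duration")

For a percentage of a calendar day rather than of the combined total, divide event_duration by 86400 instead of total_time.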
Hi, is there someone here who can create a regular expression for XML events like the one below, to prevent them from being ingested into Splunk? Sample event:

<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-Security-Auditing' Guid='{XXXXX}'/><EventID>4688</EventID><Version>2</Version><Level>0</Level><Task>13312</Task><Opcode>0</Opcode><Keywords>xxxxx</Keywords><TimeCreated SystemTime='2023-11-27'/><EventRecordID>151284011</EventRecordID><Correlation/><Execution ProcessID='4' ThreadID='8768'/><Channel>Security</Channel><Computer>XXX.com</Computer><Security/></System><EventData><Data Name='SubjectUserSid'>xxx\SYSTEM</Data><Data Name='SubjectUserName'>XXX$</Data><Data Name='SubjectDomainName'>EC</Data><Data Name='SubjectLogonId'>xxx</Data><Data Name='NewProcessId'>0x3878</Data><Data Name='NewProcessName'>C:\Program Files (x86)\Tanium\Tanium Client\Patch\tools\TaniumExecWrapper.exe</Data><Data Name='TokenElevationType'>%%xxxx</Data><Data Name='ProcessId'>xxxx</Data><Data Name='CommandLine'></Data><Data Name='TargetUserSid'>NULL SID</Data><Data Name='TargetUserName'>-</Data><Data Name='TargetDomainName'>-</Data><Data Name='TargetLogonId'>xxx</Data><Data Name='ParentProcessName'>C:\Program Files (x86)\Tanium\Tanium Client\TaniumClient.exe</Data><Data Name='MandatoryLabel'>Mandatory Label\System Mandatory Level</Data></EventData></Event>

THANKS
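For dropping events at ingest, the usual pattern is a nullQueue transform rather than a regex alone. A sketch, assuming the events arrive under an XML Windows event sourcetype (the stanza name below is an assumption; substitute your actual sourcetype), that discards 4688 events involving the Tanium wrapper. These settings act at parsing time, so they belong on an indexer or heavy forwarder:

# props.conf
[XmlWinEventLog]
TRANSFORMS-drop_tanium = drop_tanium_4688

# transforms.conf
[drop_tanium_4688]
REGEX = <EventID>4688</EventID>[\s\S]*TaniumExecWrapper\.exe
DEST_KEY = queue
FORMAT = nullQueue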
Hi Splunk Enthusiasts, I have created a table using Splunk Dashboard Studio. One of its columns contains results like 0 and various other numbers: the 0 values are displayed on the left, whereas all other values are aligned right. Can you please help me make them all align left? TIA!! PFB screenshot for your reference.
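One common explanation is value type: the table renderer left-aligns strings and right-aligns numbers, so a 0 that arrives as a string sits apart from its numeric neighbours. If left alignment for the whole column is acceptable, a minimal SPL-side sketch is to force the column to string type (the column name here is hypothetical):

| eval my_column = tostring(my_column)

The trade-off is that sorting on that column becomes lexicographic rather than numeric.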
In the screenshot below, we can see that from November 6th onwards there are three sources generated in Splunk where there should be only one, "File Collector: DepTrayCaseQty". Splunk created two other, unnecessary sources, and because of them unwanted duplicate events were also generated: "D:\Splunk\var\spool\splunk\adb0f8d721bf93e3_events.stash_new" and "D:\Splunk\var\spool\splunk\d0d3783e41cf130c_events.stash_new". Please guide us on how I can fix this issue. My assumption: is the collect command not working correctly? How can we prevent both of those sources from being ingested into Splunk?
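Those *_events.stash_new files under var\spool\splunk are how the collect command hands summary events to the indexing pipeline, and when collect is not told what source to stamp, the spool file path itself can surface as the source. A sketch of pinning the metadata explicitly (the index name is an assumption; the source value is taken from the post):

... | collect index=summary source="File Collector: DepTrayCaseQty" sourcetype=stash

Keeping sourcetype=stash (the default) also keeps the collected events from counting against license.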
Hello Team, I was trying to parse my data by updating a props.conf file I created at C:\Program Files\SplunkUniversalForwarder\etc\system\local\props.conf:

[t24protocollog]
DATETIME_CONFIG=CURRENT
SHOULD_LINEMERGE=false
LINE_BREAKER=^@ID[\s\S]*?REMARK.*$
NO_BINARY_CHECK=true
disabled=false

With the above configuration, the regex separates each line into its own event instead of defining the start and end points of an event. I checked this regex on regex101.com and it gives me the proper result there, but not in Splunk. Attaching a screenshot of how my logs appear in the Splunk GUI. Below is the content of the log file:

LIST F.PROTOCOL @ID PROTOCOL.ID PROCESS.DATE TIME.MSECS K.USER APPLICATION LEVEL.FUNCTION ID REMARK PAGE 1 11:34:02 23 NOV 2023

@ID............ 202403.16
@ID............ 202403.16
PROTOCOL.ID.... 202403.16
PROCESS.DATE... 20230926
TIME.MSECS..... 11:15:23:649
K.USER......... INPUTTER
APPLICATION.... DC.ENTRY
LEVEL.FUNCTION. 1
ID.............
REMARK......... QUIRY - AC.REPORT

@ID............ 202303.16
@ID............ 202303.16
PROTOCOL.ID.... 202303.16
PROCESS.DATE... 20230926
TIME.MSECS..... 11:15:23:649
K.USER......... INPUTTER
APPLICATION.... AC.ENTRY
LEVEL.FUNCTION. 1
ID.............
REMARK......... ENQUIRY - AC.REPORT

Kindly let me know if I'm doing anything wrong; I need to parse this log file.
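Two things are worth noting here. First, LINE_BREAKER does not behave like a regex101 match over the whole event: Splunk scans the raw stream and uses the text matched by the first capture group as the boundary, ending the previous event at the group's start and beginning the next one at its end, and the stanza above has no capture group. Second, event breaking happens in the parsing pipeline, so for unstructured data a props.conf on a universal forwarder has no effect; it must be deployed on the indexer or heavy forwarder. A sketch that ends each event after its REMARK line (to be adapted and tested against your real stream):

[t24protocollog]
SHOULD_LINEMERGE = false
LINE_BREAKER = REMARK[^\r\n]*([\r\n]+)
DATETIME_CONFIG = CURRENT
NO_BINARY_CHECK = true

The captured newline is consumed as the delimiter, so each record runs from one "@ID..." block through its "REMARK..." line.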
Greetings Team, I am trying to implement the Java Agent on the webMethods Integration Server, following the steps here. The document referred to does not provide a way to pass the argument to runtime.sh or server.sh in the respective files. Has anyone done this yet? If yes, please let me know the argument that needs to be added to get this working effectively on a Windows 2019 server. Regards, Shashwat
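For what it's worth, Integration Server instances launched through the Tanuki wrapper usually take extra JVM arguments from the instance's custom_wrapper.conf rather than from server.sh/runtime.sh directly. A sketch, with the path, property index, and agent location all assumptions to adapt:

# <IS_install>\profiles\IS_default\configuration\custom_wrapper.conf
# Use any wrapper.java.additional.N index not already taken
wrapper.java.additional.200=-javaagent:C:\appdynamics\AppServerAgent\javaagent.jar

After editing, restart the Integration Server so the wrapper re-reads the file.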
Hi Team, I am trying to create a search which shows me a list of all sourcetypes and indexes which are not in use, or let's say have zero or very few events over the last few days. Can you please advise? Thanks
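A common starting point is tstats over index-time metadata, run over, say, the last 30 days via the time picker; a sketch (the 7-day threshold is arbitrary):

| tstats count, latest(_time) as last_event where index=* by index, sourcetype
| eval days_since_last_event = round((now() - last_event) / 86400, 1)
| where days_since_last_event > 7
| sort - days_since_last_event

One caveat: combinations with zero events in the window produce no row at all, so to catch completely dead sourcetypes, compare this output against a lookup of expected index/sourcetype pairs.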
Thanks in advance. I had a call from a company and they asked whether I have experience with Splunk ingestion. I thought that is data onboarding from the GUI, right? Or is it something different?
We have 24 indexers in an indexer cluster. Recently, CPU usage is almost 100%, not on all the indexers at once; it fluctuates between them. Under the indexer clustering section, I can see the status going to "Pending" randomly between the indexers for a few seconds. This happens continuously and also causes an increase in the number of fixup buckets. I have restarted the indexer servers manually where I saw high CPU load, but it did not resolve the issue. What would be the best option to fix this, and what is the possible root cause? Any suggestions would be very helpful. Thanks in advance!
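To see what is driving the load while the status flaps, one starting point is the resource-usage data the indexers already write to the _introspection index; a sketch:

index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| eval cpu_pct = 'data.cpu_system_pct' + 'data.cpu_user_pct'
| timechart span=5m avg(cpu_pct) by host

Sustained saturation combined with "Pending" flapping and growing fixup counts often points at heartbeat timeouts under load, so correlating the CPU spikes with cluster-manager messages in _internal is a reasonable next step.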
Hello team. My task is that a universal forwarder should collect the events from other hosts and then relay them to the main server. How can I do it?
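A universal forwarder can act as an intermediate forwarder: it listens for splunktcp traffic from the other hosts' forwarders and relays everything onward. A sketch of the two files on the intermediate UF (port and hostname are assumptions):

# etc/system/local/inputs.conf -- receive from the other forwarders
[splunktcp://9997]
disabled = false

# etc/system/local/outputs.conf -- relay to the main server
[tcpout]
defaultGroup = main_server

[tcpout:main_server]
server = mainserver.example.com:9997

Each source host then points its own outputs.conf at the intermediate forwarder instead of at the main server.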
Hello, I have a problem: I want to calculate a Pearson coefficient to do correlation in a loop, but I have a big issue. I have more than 23 fields, doing the calculation manually wastes a lot of time, and the syntax gets huge. Does someone know how I can get the loop result without using the ML Toolkit?

| fields TotalCount, usages, licenseb, StorageMb, Role_number, Siglum_number, SourceTypeDescription_number *_number
| eval sq_TotalCount = TotalCount * TotalCount
| eval sq_usages = usages * usages
| eval sq_licenseb = licenseb * licenseb
| eval sq_StorageMb = StorageMb * StorageMb
| eval sq_Role_number = Role_number * Role_number
| eval sq_Siglum_number = Siglum_number * Siglum_number
| eval sq_SourceTypeDescription_number = SourceTypeDescription_number * SourceTypeDescription_number
| eval product_TotalCount_usages = TotalCount * usages
| eval product_TotalCount_licenseb = TotalCount * licenseb
| eval product_TotalCount_StorageMb = TotalCount * StorageMb
| eval product_TotalCount_Role_number = TotalCount * Role_number
| eval product_TotalCount_Siglum_number = TotalCount * Siglum_number
| eval product_TotalCount_SourceTypeDescription_number = TotalCount * SourceTypeDescription_number
| stats sum(TotalCount) as sum_TotalCount, sum(sq_TotalCount) as sum_sq_TotalCount,
    sum(usages) as sum_usages, sum(sq_usages) as sum_sq_usages,
    sum(licenseb) as sum_licenseb, sum(sq_licenseb) as sum_sq_licenseb,
    sum(StorageMb) as sum_StorageMb, sum(sq_StorageMb) as sum_sq_StorageMb,
    sum(Role_number) as sum_Role_number, sum(sq_Role_number) as sum_sq_Role_number,
    sum(Siglum_number) as sum_Siglum_number, sum(sq_Siglum_number) as sum_sq_Siglum_number,
    sum(SourceTypeDescription_number) as sum_SourceTypeDescription_number, sum(sq_SourceTypeDescription_number) as sum_sq_SourceTypeDescription_number,
    sum(product_TotalCount_usages) as sum_TotalCount_usages,
    sum(product_TotalCount_licenseb) as sum_TotalCount_licenseb,
    sum(product_TotalCount_StorageMb) as sum_TotalCount_StorageMb,
    sum(product_TotalCount_Role_number) as sum_TotalCount_Role_number,
    sum(product_TotalCount_Siglum_number) as sum_TotalCount_Siglum_number,
    sum(product_TotalCount_SourceTypeDescription_number) as sum_TotalCount_SourceTypeDescription_number,
    count as count
| eval pearson_TotalCount_usages = ((count * sum_TotalCount_usages) - (sum_TotalCount * sum_usages)) / (sqrt((count * sum_sq_TotalCount) - (sum_TotalCount * sum_TotalCount)) * sqrt((count * sum_sq_usages) - (sum_usages * sum_usages))),
    pearson_TotalCount_licenseb = ((count * sum_TotalCount_licenseb) - (sum_TotalCount * sum_licenseb)) / (sqrt((count * sum_sq_TotalCount) - (sum_TotalCount * sum_TotalCount)) * sqrt((count * sum_sq_licenseb) - (sum_licenseb * sum_licenseb))),
    pearson_TotalCount_StorageMb = ((count * sum_TotalCount_StorageMb) - (sum_TotalCount * sum_StorageMb)) / (sqrt((count * sum_sq_TotalCount) - (sum_TotalCount * sum_TotalCount)) * sqrt((count * sum_sq_StorageMb) - (sum_StorageMb * sum_StorageMb))),
    pearson_TotalCount_Role_number = ((count * sum_TotalCount_Role_number) - (sum_TotalCount * sum_Role_number)) / (sqrt((count * sum_sq_TotalCount) - (sum_TotalCount * sum_TotalCount)) * sqrt((count * sum_sq_Role_number) - (sum_Role_number * sum_Role_number))),
    pearson_TotalCount_Siglum_number = ((count * sum_TotalCount_Siglum_number) - (sum_TotalCount * sum_Siglum_number)) / (sqrt((count * sum_sq_TotalCount) - (sum_TotalCount * sum_TotalCount)) * sqrt((count * sum_sq_Siglum_number) - (sum_Siglum_number * sum_Siglum_number))),
    pearson_TotalCount_SourceTypeDescription_number = ((count * sum_TotalCount_SourceTypeDescription_number) - (sum_TotalCount * sum_SourceTypeDescription_number)) / (sqrt((count * sum_sq_TotalCount) - (sum_TotalCount * sum_TotalCount)) * sqrt((count * sum_sq_SourceTypeDescription_number) - (sum_SourceTypeDescription_number * sum_SourceTypeDescription_number)))
| table pearson_TotalCount_usages, pearson_TotalCount_licenseb, pearson_TotalCount_StorageMb, pearson_TotalCount_Role_number, pearson_TotalCount_Siglum_number, pearson_TotalCount_SourceTypeDescription_number
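One way to avoid writing every pair by hand is foreach, which templates an eval over all fields matching a wildcard. A sketch, under the assumption that every varying field matches *_number (usages, licenseb, and StorageMb would first need a _number-style suffix, e.g. | rename usages as usages_number, for the wildcards to pick them up):

| fields TotalCount, *_number
| eval sq_x = TotalCount * TotalCount
| foreach *_number
    [ eval <<FIELD>>_sq = '<<FIELD>>' * '<<FIELD>>', <<FIELD>>_xy = TotalCount * '<<FIELD>>' ]
| stats count as n, sum(TotalCount) as sum_x, sum(sq_x) as sum_xx,
    sum(*_number) as *_number_sum, sum(*_number_sq) as *_number_sqsum, sum(*_number_xy) as *_number_xysum
| foreach *_number_xysum
    [ eval pearson_<<MATCHSEG1>> = ((n * '<<FIELD>>') - (sum_x * '<<MATCHSEG1>>_number_sum')) / (sqrt((n * sum_xx) - (sum_x * sum_x)) * sqrt((n * '<<MATCHSEG1>>_number_sqsum') - ('<<MATCHSEG1>>_number_sum' * '<<MATCHSEG1>>_number_sum'))) ]
| table pearson_*

The first foreach builds the squared and cross-product terms, stats collapses them with wildcarded renames, and the second foreach emits one coefficient per field.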
The code for this issue is here: https://github.com/NathanDotTo/structurizr-onpremises/blob/main/structurizr-onpremises/Dockerfile_service

I am using the AppD agent within a Tomcat-based web app. The agent directory is copied into the container unaltered from the original zip file:

ENV APPDAGENTDIR=AppServerAgent-1.8-23.10.0.35234
ADD $APPDAGENTDIR /$APPDAGENTDIR
RUN chown -R root /$APPDAGENTDIR
RUN chmod -R a+rwx /$APPDAGENTDIR

I start the web app with:

ENV CATALINA_OPTS="-Xms512M -Xmx512M -javaagent:/AppServerAgent-1.8-23.10.0.35234/javaagent.jar"

I get this error:

>>>> MultiTenantAgent Dynamic Service error - could not open Dynamic Service Log /AppServerAgent-1.8-23.10.0.35234/ver23.10.0.35234/logs/8b60cbc478b0/argentoDynamicService_11-27-2023-08.17.53.log
Running as user root
Cannot write to parent folder /AppServerAgent-1.8-23.10.0.35234/ver23.10.0.35234/logs/8b60cbc478b0
Could NOT get owner for MultiTenantAgent Dynamic Services Folder
Likely due to fact that owner (null) is not same user as the runtime user (root) which means you will need to give group write access using this command: find external-services/argentoDynamicService -type d -exec chmod g+w {}
Possibly due to lack of permissions or file access to folder: Exists: false, CanRead: false, CanWrite: false
Possibly due to lack of permissions or file access to log: Exists: false, CanRead: false, CanWrite: false
Possibly due to java.security.Manager set - null
Possibly due to missed agent-runtime-dir in Controller-XML and will need the property set to correct this...
Call Stack: java.io.FileNotFoundException: /AppServerAgent-1.8-23.10.0.35234/ver23.10.0.35234/logs/8b60cbc478b0/argentoDynamicService_11-27-2023-08.17.53.log (No such file or directory)

From within the container I can see that the logs directory is owned by root:

cd /AppServerAgent-1.8-23.10.0.35234/ver23.10.0.35234/logs/
root@8b60cbc478b0:/AppServerAgent-1.8-23.10.0.35234/ver23.10.0.35234/logs# ls -la
total 16
drwxrwxrwx 1 root root 4096 Nov 27 08:20 .
drwxrwxrwx 1 root root 4096 Oct 27 20:45 ..
drwxr-x--- 2 root root 4096 Nov 27 08:20 Tomcat@8b60cbc478b0_8005
root@8b60cbc478b0:/AppServerAgent-1.8-23.10.0.35234/ver23.10.0.35234/logs#

Since the logs directory is clearly owned by root, I suspect that the error message is simply misleading. Any suggestions please?

Many thanks
Nathan
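One detail in the output: the failing path (.../logs/8b60cbc478b0) reports Exists: false, so the Dynamic Service seems to be failing while creating a new, container-ID-named directory at runtime, not writing to the directory visible in ls. The error text itself points at the agent-runtime-dir property; a hedged workaround, assuming this agent version supports the property, is to redirect the agent's runtime artifacts to a location that is guaranteed writable:

# Dockerfile sketch: point the agent's runtime files at a writable location
RUN mkdir -p /tmp/appd-runtime && chmod -R a+rwx /tmp/appd-runtime
ENV CATALINA_OPTS="-Xms512M -Xmx512M \
  -javaagent:/AppServerAgent-1.8-23.10.0.35234/javaagent.jar \
  -Dappdynamics.agent.runtime.dir=/tmp/appd-runtime"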
I was referring to this video https://www.youtube.com/watch?v=Dv_lp-aHnv8 but no events are found on the event summary page. This is the Setup and Migration page. I installed Splunk in a local environment, so I filled in HEC Host and Port with the default values (localhost, 8088). Please tell me if I'm doing something wrong.
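A quick way to rule out the HEC side is to send a test event directly and check whether it reaches the index; a sketch (replace the token; -k is only there because a local install usually has a self-signed certificate):

curl -k https://localhost:8088/services/collector/event \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hec smoke test", "sourcetype": "manual"}'

If this returns {"text":"Success","code":0} but the app's summary page stays empty, the problem is more likely in the app's setup values than in HEC itself.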
Hi Community, hope you are doing well. We have set the retention of each index to 1 year (6 months of data searchable, on the hot or cold mount, and 6 months frozen, on the archive mount) due to our compliance requirements. Now I need to identify the oldest data age for each index across hot, warm, and frozen: is a full year of data buckets present on our mount points or not? Please share a command to identify bucket age for each index. Regards, Mehboob
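For the searchable tiers, dbinspect reports per-bucket time ranges, so the oldest event per index and bucket state can be derived directly; a sketch:

| dbinspect index=*
| stats min(startEpoch) as oldest_event by index, state
| eval oldest_event = strftime(oldest_event, "%Y-%m-%d")
| sort index, state

Frozen buckets are no longer visible to Splunk once they roll, so for the archive mount you would instead list the frozen bucket directory names at the OS level; the names encode each bucket's epoch time range.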
Hello Splunk experts, I'm pretty new to Splunk and I would like your help in forming a query for the following requirement. I would like to create a bar chart for each OEM (a total of 5 separate bar chart widgets, since we have 5 OEMs) based on the completion progress of NCAPTest. These events will be pushed to Splunk every Monday. The x-axis should show the timestamp (_time) in YYYY-MM-DD format, and the y-axis should show a stacked bar where the bottom portion shows the completed count (NCAPTest=Yes) along with the completion percentage, and the top portion shows the remaining count (NCAPTest=No). This is what the data looks like:

6 Nov 2023 events:
OEM     Model   Type      NCAPTest
Honda   Civic   Sedan     No
Honda   CR-V    SUV       Yes
Honda   Fit     Hatchback No
VW      Jetta   Sedan     Yes
VW      Tiguan  SUV       Yes
VW      Golf    Hatchback No
Tata    Harrier SUV       Yes
Tata    Tiago   Hatchback No
Tata    Altroz  Hatchback No
Kia     Seltos  SUV       No
Kia     Forte   Sedan     No
Kia     Rio     Hatchback No
Hyundai Elantra Sedan     No
Hyundai Kona    SUV       Yes
Hyundai i20     Hatchback No

13 Nov 2023 events:
Honda   Civic   Sedan     Yes
Honda   CR-V    SUV       Yes
Honda   Fit     Hatchback No
VW      Jetta   Sedan     Yes
VW      Tiguan  SUV       Yes
VW      Golf    Hatchback No
Tata    Harrier SUV       Yes
Tata    Tiago   Hatchback No
Tata    Altroz  Hatchback Yes
Kia     Seltos  SUV       No
Kia     Forte   Sedan     Yes
Kia     Rio     Hatchback Yes
Hyundai Elantra Sedan     No
Hyundai Kona    SUV       Yes
Hyundai i20     Hatchback No

20 Nov 2023 events:
Honda   Civic   Sedan     Yes
Honda   CR-V    SUV       Yes
Honda   Fit     Hatchback Yes
VW      Jetta   Sedan     Yes
VW      Tiguan  SUV       Yes
VW      Golf    Hatchback Yes
Tata    Harrier SUV       Yes
Tata    Tiago   Hatchback Yes
Tata    Altroz  Hatchback Yes
Kia     Seltos  SUV       Yes
Kia     Forte   Sedan     Yes
Kia     Rio     Hatchback Yes
Hyundai Elantra Sedan     Yes
Hyundai Kona    SUV       Yes
Hyundai i20     Hatchback Yes

Any help is greatly appreciated.
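A sketch of one widget's search, assuming the events live in a hypothetical index named ncap_reports with the fields shown above; duplicate it per OEM, or drive the OEM name from a dashboard token:

index=ncap_reports OEM="Honda"
| eval day = strftime(_time, "%Y-%m-%d")
| chart count(eval(NCAPTest="Yes")) as completed, count(eval(NCAPTest="No")) as remaining over day
| eval completion_pct = round(100 * completed / (completed + remaining), 1)

Plotting completed and remaining as a stacked column chart over day gives the two-part bars; completion_pct can feed a chart overlay or label for the percentage.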
Hello, I am trying to install Splunk onto an Ubuntu server. I cannot find the CLI commands to do it.
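A minimal sketch of a .deb-based install, with the package filename a placeholder for whichever version you download from splunk.com (the exact filename pattern varies by release):

# Install the downloaded package, then start Splunk for the first time
sudo dpkg -i splunk-<version>-linux-2.6-amd64.deb
sudo /opt/splunk/bin/splunk start --accept-license
# Optional: start Splunk automatically at boot
sudo /opt/splunk/bin/splunk enable boot-start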
Hi, I have a log with a field called "name". My regex cannot extract the hostname from the name field because there are multiple formats, for example:

(DR) HostA-AIX-172.0.0.0-root
01-HostA-10-Cambodia-Cisco_Router-10.0.0.0-root1
172.0.0.0-Malaysia-Windows Server 2016-HostA-admin
172.0.0.0 - HostA-Indonesia-Win2012-172.0.0.0-admin
3D-(DR) HostA-Win2003-172.0.0.0 [NAT IP 192.0.0.0] (dmin)
AD-HostA.local-srv_AB_CDD
HostA-India-Solaris10-172.0.0.0-root

These are samples of the inconsistent log from which we need to get the hostname; the highlighted value (HostA) is what should be extracted. Please assist by creating a new regex.
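With formats this inconsistent, a single clean anchor is hard to find; in the samples above, the one constant is that the hostname token itself begins with "Host". A sketch that exploits exactly that (purely illustrative, since real hostnames presumably follow some other convention you would substitute into the pattern):

| rex field=name "(?<hostname>\bHost[A-Za-z0-9.]*)"

If the real hostnames share a different trait (a known prefix list, or always sitting next to the OS token), anchoring on that trait will be far more robust than trying to model every separator variation.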
Configure a universal forwarder to monitor a file and send it to Splunk Cloud via HEC. Using curl, I'm able to hit Splunk Cloud and see the result, but I'm not sure how to configure the universal forwarder.
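Universal forwarders from 8.1 onward can send over HTTP using an [httpout] stanza in outputs.conf instead of the usual tcpout; a sketch of the two files (monitored path, URI, and token are placeholders, and the exact Splunk Cloud HEC hostname depends on your stack):

# etc/system/local/inputs.conf
[monitor:///var/log/myapp/app.log]
index = main
sourcetype = myapp

# etc/system/local/outputs.conf
[httpout]
httpEventCollectorToken = <your-hec-token>
uri = https://http-inputs-<your-stack>.splunkcloud.com:443

The more common Splunk Cloud pattern is still the universal forwarder credentials app with tcpout on 9997; httpout is the option that matches the HEC route you already validated with curl.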
Hey, can someone please help me build a query for users accessing a webpage despite a warning page from the proxy? @splunk
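The index, field, and action values below are assumptions, since proxy sourcetypes differ; the idea is to find user/URL pairs that logged both a warning and an allow:

index=proxy (action="warned" OR action="allowed")
| stats values(action) as actions, min(_time) as first_seen by user, url
| where isnotnull(mvfind(actions, "warned")) AND isnotnull(mvfind(actions, "allowed"))
| convert ctime(first_seen)

Swap the index, action values, and field names for whatever your proxy's add-on actually extracts; different vendors label the warn/continue pair differently.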