All Posts



I can't quite tell what the input data is or how Splunk is splitting it. Do you want a separate event each time a timestamp like Feb 13 appears? If so, add a props.conf stanza for your indexers that says the event starts with the date at the beginning of the line.
Hello, I have a set of Grade (Math, English, Science) data for Student1 and Student2 from 2/8/2024 to 3/1/2024. How do I display timechart multivalues without the colon? The complete search is down below. Thank you so much for your help.

This is the result with the colon. Is it possible to display the data like the following? Should I parse the data to get this display, or is there a better way to do this?

Student    Grades        2/8/2024  2/15/2024  2/22/2024  2/29/2024
Student1   EnglishGrade  10        7          7          10
Student1   MathGrade     10        7          7          10
Student1   ScienceGrade  10        7          7          10
Student2   EnglishGrade  9         6          7          9
Student2   MathGrade     9         6          7          9
Student2   ScienceGrade  9         6          7          9

Here's the search:

| makeresults format=csv data="_time,Student,MathGrade,EnglishGrade,ScienceGrade
1707368400,Student1,10,10,10
1707454800,Student1,9,9,9
1707541200,Student1,8,8,8
1707627600,Student1,7,7,7
1707714000,Student1,6,6,6
1707800400,Student1,5,5,5
1707886800,Student1,6,6,6
1707973200,Student1,7,7,7
1708059600,Student1,8,8,8
1708146000,Student1,9,9,9
1708232400,Student1,10,10,10
1708318800,Student1,10,10,10
1708405200,Student1,9,9,9
1708491600,Student1,8,8,8
1708578000,Student1,7,7,7
1708664400,Student1,6,6,6
1708750800,Student1,5,5,5
1708837200,Student1,6,6,6
1708923600,Student1,7,7,7
1709010000,Student1,8,8,8
1709096400,Student1,9,9,9
1709182800,Student1,10,10,10
1709269200,Student1,10,10,10
1707368400,Student2,9,9,9
1707454800,Student2,5,5,5
1707541200,Student2,6,6,6
1707627600,Student2,7,7,7
1707714000,Student2,8,8,8
1707800400,Student2,9,9,9
1707886800,Student2,5,5,5
1707973200,Student2,6,6,6
1708059600,Student2,7,7,7
1708146000,Student2,8,8,8
1708232400,Student2,9,9,9
1708318800,Student2,9,9,9
1708405200,Student2,5,5,5
1708491600,Student2,6,6,6
1708578000,Student2,7,7,7
1708664400,Student2,8,8,8
1708750800,Student2,9,9,9
1708837200,Student2,5,5,5
1708923600,Student2,6,6,6
1709010000,Student2,7,7,7
1709096400,Student2,8,8,8
1709182800,Student2,9,9,9
1709269200,Student2,9,9,9"
| table _time, Student, MathGrade, EnglishGrade, ScienceGrade
| timechart span=1w first(MathGrade) as MathGrade, first(EnglishGrade) as EnglishGrade, first(ScienceGrade) as ScienceGrade by Student useother=f limit=0
| eval _time = strftime(_time,"%m/%d/%Y")
| fields - _span _spandays
| transpose 0 header_field=_time column_name=Grades
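One possible approach (a sketch, not tested against this data): timechart names each series "MathGrade: Student1", so after the transpose you can split the Grades column back apart into separate Student and Grades fields. This assumes the series names always contain exactly one ": " separator.

```spl
... the search above, ending with | transpose 0 header_field=_time column_name=Grades, then:
| eval Student = mvindex(split(Grades, ": "), 1)
| eval Grades = mvindex(split(Grades, ": "), 0)
| table Student Grades *
| sort Student Grades
```

This avoids reshaping the data before timechart; the cost is that the split runs on display names, so it breaks if a student name ever contains ": ".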
Not yet, no.
Hi, so how are the events being split by Splunk? And do you have any props.conf settings to tell Splunk how to split the events?
@burwell wrote: Hi, so what's the patching schedule? Every 28 days starting on Feb 1? Sorry, yes. Every 28 days starting Feb 1.
Hi, so what's the patching schedule? Every 28 days starting on Feb 1?
Hello everyone, I am trying to use Splunk to create an ongoing patching countdown that will be a Single Value (Days Until Patch) on my dashboard. How can I go about accomplishing this? I was able to calculate one patch cycle, but I am not sure how to get it to recalculate for every month. Right now, for example, it is telling me the next patch date is 2/29/2024. Hoping someone already has a solution built out. Thank you for any assistance!

This is what I have so far:

| makeresults
| eval start = strptime("02-01-2024", "%m-%d-%Y")
| eval startStr = strftime(start, "%D")
| eval PatchDate = relative_time(start, "+28d")
| eval PatchDateString = strftime(PatchDate, "%D")
| eval PriorPatchDate = relative_time(start, "-28d")
| eval PriorPatchDateString = strftime(PriorPatchDate, "%D")
| eval daysCountD = strftime(PatchDate - now(), "%d")
| table daysCountD PriorPatchDateString PatchDateString
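A sketch of one way to make the countdown roll over automatically (untested; the anchor date and field names are only illustrative): compute how many full 28-day cycles have elapsed since the anchor, then project the next cycle boundary forward.

```spl
| makeresults
| eval anchor = strptime("02-01-2024", "%m-%d-%Y")
| eval cycle = 28 * 86400
| eval cyclesElapsed = floor((now() - anchor) / cycle)
| eval NextPatch = anchor + (cyclesElapsed + 1) * cycle
| eval NextPatchDate = strftime(NextPatch, "%m/%d/%Y")
| eval DaysUntilPatch = ceiling((NextPatch - now()) / 86400)
| table NextPatchDate DaysUntilPatch
```

Note the day count here divides an actual duration in seconds by 86400, rather than formatting a duration with strftime's "%d", which treats the difference as if it were a calendar date.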
I am trying to run the following search:

index=tripwire LogCategory="Audit Event" AND "/etc/pki/rpm-gpg/RPM-GPG-KEY-shibboleth-7" AND "myserver.mydomain.com"
| rex max_match=0 field=_raw "(?<lineData>[^\n]+)"
| rex field=Msg "'(?<FilePath>.*)' accessed by"
| rex field=_raw "accessed\sby\s'(?<Audit_UserName>.*)'.\sType"
| table _time, FilePath, Audit_UserName

However, the way I am splitting the multiline data doesn't appear to be working. Here is a sample of the data as viewed in Notepad++ with symbols shown; every line ends in CR LF. In Splunk, though, the events aren't being split up. What am I missing here? I have had this work with similar data, but I'm unsure what is different in this situation. TIA!
How do I retrieve NPA and NXX values from CNAC.ca using a Splunk query?
When a lookup is updated via | outputlookup, does that change the modified time? For example: search for a lookup or KV store name and see the SPL that gives overall usage, then have the option to filter to only those SPL searches containing an outputlookup that modifies the file.

index=abc sourcetype=xyz
| stats count
| outputlookup append=true newlookup.csv

How can I track whether the outputlookup file was updated or not using the _internal or _audit index? Please suggest a Splunk query to get the status.
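As a starting point (a sketch, not verified in your environment, and _audit field names can vary by version): the audit index records completed searches, so you can look for ones whose search string mentions both outputlookup and your lookup's name.

```spl
index=_audit action=search info=completed "outputlookup" "newlookup.csv"
| table _time user search
| sort - _time
```

This tells you which searches that write to the lookup actually completed; it does not by itself confirm the file on disk changed, so you may still want to compare the lookup's contents or row count before and after.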
Thanks Kiran, i will test it . 
Using the DECRYPT2 app, I have a search that uses the decrypt command to decode an encoded string. It returns results as a table just fine. I created an email alert using this search, but the alert fails to trigger unless I remove the decrypted field from the table. I would like the email alert to be sent including the decoded value from the decrypted field. Does anyone know what the issue might be?
I have a search query S1 which gives me a result X with numeric values (203, 204, 205), and I am able to populate this in a table. Now I want to bind this number with a URL to create a dynamic URL specific to that number. For example, the URL is https://cnac.ca/data/json/npas.json, and when the S1 query returns the value 203, I want to build the URL https://cnac.ca/data/json/npa203.json and hit it to get the response back with the required data. For this I am using the curl command, but when I try to create a dynamic URL and pass it to the curl command, it doesn't accept it. Please suggest.
Hello, how do I use a specific start date in a weekly timechart? For example: I have a set of Grade (Math, English, Science) data for Student1 and Student2 from 2/8/2024 to 3/1/2024. When I use a weekly timechart, it always starts with 02/08/2024.

| timechart span=1w first(MathGrade) by Student useother=f limit=0

How do I start from another date, such as 02/09/2024 or 02/10/2024? Thank you for your help. Here's the search:

| makeresults format=csv data="_time,Student,MathGrade,EnglishGrade,ScienceGrade
1707368400,Student1,10,10,10
1707454800,Student1,9,9,9
1707541200,Student1,8,8,8
1707627600,Student1,7,7,7
1707714000,Student1,6,6,6
1707800400,Student1,5,5,5
1707886800,Student1,6,6,6
1707973200,Student1,7,7,7
1708059600,Student1,8,8,8
1708146000,Student1,9,9,9
1708232400,Student1,10,10,10
1708318800,Student1,10,10,10
1708405200,Student1,9,9,9
1708491600,Student1,8,8,8
1708578000,Student1,7,7,7
1708664400,Student1,6,6,6
1708750800,Student1,5,5,5
1708837200,Student1,6,6,6
1708923600,Student1,7,7,7
1709010000,Student1,8,8,8
1709096400,Student1,9,9,9
1709182800,Student1,10,10,10
1709269200,Student1,10,10,10
1707368400,Student2,9,9,9
1707454800,Student2,5,5,5
1707541200,Student2,6,6,6
1707627600,Student2,7,7,7
1707714000,Student2,8,8,8
1707800400,Student2,9,9,9
1707886800,Student2,5,5,5
1707973200,Student2,6,6,6
1708059600,Student2,7,7,7
1708146000,Student2,8,8,8
1708232400,Student2,9,9,9
1708318800,Student2,9,9,9
1708405200,Student2,5,5,5
1708491600,Student2,6,6,6
1708578000,Student2,7,7,7
1708664400,Student2,8,8,8
1708750800,Student2,9,9,9
1708837200,Student2,5,5,5
1708923600,Student2,6,6,6
1709010000,Student2,7,7,7
1709096400,Student2,8,8,8
1709182800,Student2,9,9,9
1709269200,Student2,9,9,9"
| table _time, Student, MathGrade, EnglishGrade, ScienceGrade
| timechart span=1w first(MathGrade) by Student useother=f limit=0
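One option to try (a sketch, not tested against this data): timechart itself doesn't take an alignment option, but bin does, so you can replace the timechart line with bin plus stats and xyseries to rebuild the chart. The aligntime epoch below is only illustrative (1707541200 is 02/10/2024 midnight US Eastern); substitute the epoch of whatever start date you want in your timezone.

```spl
... the search above, with the final timechart line replaced by:
| bin _time span=7d aligntime=1707541200
| stats first(MathGrade) as MathGrade by _time Student
| xyseries _time Student MathGrade
```

bin's aligntime shifts where each 7-day bucket boundary falls, which is exactly the knob timechart is missing here.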
Here is my current rex command:

EventCode=1004
| rex field=_raw "Files: (?<Media_Source>.+?\.txt)"
| table Media_Source

My source data looks like this:

Files: C:\ProgramData\Roxio Log Files\Test.test_user_20240305122549.txt SHA1: 73b710056457bd9bda5fee22bb2a2ada8aa9f3e0

My current rex result is: C:\ProgramData\Roxio Log Files\Test.test_user_20240305122549.txt
How do I make it: Test.test_user_20240305122549.txt
I'm trying to drop: C:\ProgramData\Roxio Log Files\
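One way to do this (a sketch, untested): keep the existing rex and then take everything after the last backslash with split/mvindex, rather than fighting backslash escaping inside the regex itself.

```spl
EventCode=1004
| rex field=_raw "Files: (?<Media_Source>.+?\.txt)"
| eval Media_Source = mvindex(split(Media_Source, "\\"), -1)
| table Media_Source
```

In eval string literals, "\\" is a single backslash, so the split produces the path segments and mvindex(-1) keeps only the file name.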
@dicksola Hello, you can combine both queries into one single dashboard. Use lookup (and, if needed, map) to bring your CSV dataset into Splunk fields; after uploading the CSV file, you can view its fields and values in Splunk. You can then write your query 2 using the | inputlookup or | lookup commands, and combine it with your query 1 (index=test* "users") using a subsearch or the append command.

https://docs.splunk.com/Documentation/Splunk/9.2.0/SearchReference/Append
https://docs.splunk.com/Documentation/Splunk/9.2.0/SearchReference/Lookup
https://docs.splunk.com/Documentation/Splunk/9.2.0/Knowledge/Aboutlookupsandfieldactions
Hi, I've been trying to connect/join two log sources which have fields that share the same values. To break it down:

source_1: field_A, field_D, and field_E
source_2: field_B and field_C

field_A and field_B can share the same value; field_C can correspond to multiple values of field_A/field_B. The query should essentially add field_C from source_2 to every filtered event in source_1 (like a left join, with source_2 almost functioning as a lookup table).

I've gotten pretty close with my join query, but it's a bit slow and isn't populating all the field_C values; inspecting the job reveals I'm hitting the 50000-result subsearch limit. I've also tried a stats query, which is much faster, but it's not actually connecting the events/data together. Here are the queries I've been using so far:

join:

index=index_1 sourcetype=source_1 field_D="Device" field_E=*Down* OR field_E=*Up*
| rename field_A as field_B
| join type=left max=0 field_B [ search source="source_2" earliest=-30d@d latest=@m ]
| table field_D field_E field_B field_C

stats w/ coalesce():

index=index_1 (sourcetype=source_1 field_D="Device" field_E=*Down* OR field_E=*Up*) OR (source="source_2" earliest=-30d@d latest=@m)
| eval field_AB=coalesce(field_A, field_B)
| fields field_D field_E field_AB field_C
| stats values(*) as * by field_AB

Expected output:

field_D   field_E       field_A/field_B  field_C
fun_text  Up/Down_text  shared_value     corresponding_value
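A sketch of one common pattern (untested; it assumes field_C only ever comes from source_2 events): use eventstats to spread field_C across all events sharing field_AB, then keep only the source_1 events so each original row retains its own field_D/field_E values.

```spl
index=index_1 (sourcetype=source_1 field_D="Device" (field_E=*Down* OR field_E=*Up*)) OR (source="source_2" earliest=-30d@d latest=@m)
| eval field_AB=coalesce(field_A, field_B)
| eventstats values(field_C) as field_C by field_AB
| where sourcetype="source_1"
| table field_D field_E field_AB field_C
```

Unlike the stats version, eventstats leaves each event intact, so it behaves like the left join you describe without the subsearch row limit; it does have its own memory limits for very high-cardinality by-fields.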
@GHk62 Refer this documentation  Start or stop the universal forwarder - Splunk Documentation  If you want to reset your admin password: Solved: How to Reset the Admin password? - Splunk Community
I have old search heads that were removed via the "splunk remove shcluster-member" command. They rightfully do not show when I run "splunk show shcluster-status"; however, when I run "splunk show kvstore-status", all the removed search heads still appear in the listing. How do I get them removed from the KV store clustering as well?
@RyanPrice The stanza you've added monitors the log files under ///var/www/.../storage/logs/laravel*.log. If these logs are large or frequently updated, that could contribute to increased memory usage.

Verify that you have disabled THP; refer to the Splunk doc: https://docs.splunk.com/Documentation/Splunk/latest/ReleaseNotes/SplunkandTHP

Also check limits.conf: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Limitsconf

[thruput]
maxKBps = <integer>
* The maximum speed, in kilobytes per second, that incoming data is processed through the thruput processor in the ingestion pipeline.
* To control the CPU load while indexing, use this setting to throttle the number of events this indexer processes to the rate (in kilobytes per second) that you specify.
* NOTE: There is no guarantee that the thruput processor will always process less than the number of kilobytes per second that you specify with this setting. The status of earlier processing queues in the pipeline can cause temporary bursts of network activity that exceed what is configured in the setting.
* The setting does not limit the amount of data that is written to the network from the tcpoutput processor, such as what happens when a universal forwarder sends data to an indexer.
* The thruput processor applies the 'maxKBps' setting for each ingestion pipeline. If you configure multiple ingestion pipelines, the processor multiplies the 'maxKBps' value by the number of ingestion pipelines that you have configured.
* For more information about multiple ingestion pipelines, see the 'parallelIngestionPipelines' setting in the server.conf.spec file.
* Default (Splunk Enterprise): 0 (unlimited)
* Default (Splunk Universal Forwarder): 256

The default on a universal forwarder is 256 KBps; if this throttle is the actual reason data is piling up, you might consider increasing it. You can set the value to 0, which means unlimited.

Universal or Heavy, that is the question?
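For example, a minimal limits.conf override on the forwarder might look like this (assuming the thruput cap is indeed the bottleneck; 0 removes the limit entirely, or you could pick a higher finite value such as 1024):

```
[thruput]
# 0 = unlimited; the universal forwarder default is 256 KBps
maxKBps = 0
```

Restart the forwarder after changing this, and keep in mind that removing the cap entirely lets the forwarder saturate the network link during catch-up.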