All Posts

@dicksola Hello, you can combine both queries and create one single dashboard. Use a lookup to map your CSV dataset to Splunk fields; after uploading the CSV file you can view its fields and values in Splunk. You can then write your query 2 using the |inputlookup or |lookup commands, and combine it with your query 1 (index=test* "users") using a subsearch or the append command.
https://docs.splunk.com/Documentation/Splunk/9.2.0/SearchReference/Append
https://docs.splunk.com/Documentation/Splunk/9.2.0/SearchReference/Lookup
https://docs.splunk.com/Documentation/Splunk/9.2.0/Knowledge/Aboutlookupsandfieldactions
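For example, the append and lookup combinations described above could look something like this — my_users.csv, user, and department are placeholder names for the lookup and its fields, not taken from the original post:
index=test* "users"
| append [| inputlookup my_users.csv]
or, to enrich each event instead of appending extra rows:
index=test* "users"
| lookup my_users.csv user OUTPUT department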
Hi, I've been trying to connect/join two log sources that have fields sharing the same values. To break it down:
source_1: field_A, field_D, and field_E
source_2: field_B and field_C
field_A and field_B can share the same value. field_C can correspond to multiple values of field_A/field_B. The query should essentially add field_C from source_2 to every filtered event in source_1 (like a left join, with source_2 almost functioning as a lookup table). I've gotten pretty close with my join query, but it's a bit slow and not populating all the field_C values; inspecting the job reveals I'm hitting the 50000 result limit. I've also tried a stats-based query, which is much faster, but it's not actually connecting the events/data together. Here are the queries I've been using so far:
join
index=index_1 sourcetype=source_1 field_D="Device" field_E=*Down* OR field_E=*Up*
| rename field_A as field_B
| join type=left max=0 field_B [ search source="source_2" earliest=-30d@d latest=@m]
| table field_D field_E field_B field_C
stats w/ coalesce()
index=index_1 (sourcetype=source_1 field_D="Device" field_E=*Down* OR field_E=*Up*) OR (source="source_2" earliest=-30d@d latest=@m)
| eval field_AB=coalesce(field_A, field_B)
| fields field_D field_E field_AB field_C
| stats values(*) as * by field_AB
Expected output:
field_D field_E field_A/field_B field_C
fun_text Up/Down_text shared_value corresponding_value
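A sketch of one way the stats-style approach above is sometimes completed so that field_C lands on each source_1 event — eventstats is an assumption here, not something from the question, and it is untested against this data:
index=index_1 (sourcetype=source_1 field_D="Device" (field_E=*Down* OR field_E=*Up*)) OR (source="source_2" earliest=-30d@d latest=@m)
| eval field_AB=coalesce(field_A, field_B)
| eventstats values(field_C) as field_C by field_AB
| where sourcetype="source_1"
| table field_D field_E field_AB field_C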
@GHk62 Refer to this documentation: Start or stop the universal forwarder - Splunk Documentation
If you want to reset your admin password: Solved: How to Reset the Admin password? - Splunk Community
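As a rough sketch of what the password-reset route usually involves — stop the forwarder first, and treat the details below as something to verify against the linked thread rather than exact instructions: if an existing $SPLUNK_HOME/etc/passwd file is present, move it aside, then create $SPLUNK_HOME/etc/system/local/user-seed.conf containing
[user_info]
USERNAME = admin
PASSWORD = <new password>
and start Splunk again; the credentials are seeded on that first start.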
I have old searchheads that were removed via "splunk remove shcluster-member" command.  They rightfully do not show when I run "splunk show shcluster-status", however when I run  "splunk show kvstore-status" all the removed searchheads still show in this listing.  How do I get them removed from the kvstore clustering as well?
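One approach that gets suggested for stale KV store cluster members is to reset the KV store cluster configuration so it re-forms from the current search head cluster membership — roughly, on each remaining member, with splunkd stopped and after checking the documentation for your version:
splunk clean kvstore --cluster
splunk start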
@RyanPrice The stanza you've added monitors the log files under ///var/www/.../storage/logs/laravel*.log. If these logs are large or frequently updated, that could contribute to increased memory usage.
Verify that you have disabled THP; refer to the Splunk doc on it: https://docs.splunk.com/Documentation/Splunk/latest/ReleaseNotes/SplunkandTHP
Please also check limits.conf: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Limitsconf
[thruput]
maxKBps = <integer>
* The maximum speed, in kilobytes per second, that incoming data is processed through the thruput processor in the ingestion pipeline.
* To control the CPU load while indexing, use this setting to throttle the number of events this indexer processes to the rate (in kilobytes per second) that you specify.
* NOTE:
* There is no guarantee that the thruput processor will always process less than the number of kilobytes per second that you specify with this setting. The status of earlier processing queues in the pipeline can cause temporary bursts of network activity that exceed what is configured in the setting.
* The setting does not limit the amount of data that is written to the network from the tcpoutput processor, such as what happens when a universal forwarder sends data to an indexer.
* The thruput processor applies the 'maxKBps' setting for each ingestion pipeline. If you configure multiple ingestion pipelines, the processor multiplies the 'maxKBps' value by the number of ingestion pipelines that you have configured.
* For more information about multiple ingestion pipelines, see the 'parallelIngestionPipelines' setting in the server.conf.spec file.
* Default (Splunk Enterprise): 0 (unlimited)
* Default (Splunk Universal Forwarder): 256
The default on the universal forwarder is 256. If throttled throughput is the actual reason data is piling up, you might consider increasing it; you can set the value to "0", which means unlimited.
Universal or Heavy, that is the question? | Splunk
Splunk Universal Forwarder | Splunk
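For illustration, the change the reply above points at is a small limits.conf override on the universal forwarder (in $SPLUNK_HOME/etc/system/local/ or a deployed app); the value here is just an example to tune, not a recommendation:
[thruput]
maxKBps = 0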
Hi, when I installed the Splunk forwarder on my host, it didn't ask me for an administrator username and password, so when I start Splunk and connect it to my Splunk Enterprise instance I can't enter any credentials, and the default password doesn't work. Thanks for your help.
Hello, we have the universal forwarder running on many machines. In general, the memory usage is 200MB and below. However, when adding the below stanza to inputs.conf it balloons to around 3000MB (3GB) on servers where the /var/www file path contains some content.
[monitor:///var/www/.../storage/logs/laravel*.log]
index = lh-linux
sourcetype = laravel_log
disabled = 0
These logs are not plentiful or especially active, so I'm confused about the large spike in memory usage. There would only be a handful of logs, and they'd be updated infrequently, yet the memory spike happens anyway. I've tried to be as specific with the filepath as I can (I still need the wildcard directory path), but that doesn't seem to bring any better performance. There may be a lot of files in that path, but only a handful actually match the monitor stanza criteria. Any suggestions on what can be done? Thanks in advance.
Hi Rich,  Thank you for showing me what I need to do to get slurm data into Splunk. --Tuesday Armijo
Hello, I need help with perfecting a sourcetype that doesn't index my JSON files correctly when I define multiple capture groups within the LINE_BREAKER parameter. I'm using this other question to try to figure out how to make it work: https://community.splunk.com/t5/Getting-Data-In/How-to-handle-LINE-BREAKER-regex-for-multiple-capture-groups/m-p/291996
In my case my JSON looks like this:
[{"Field 1": "Value 1", "Field N": "Value N"}, {"Field 1": "Value 1", "Field N": "Value N"}, {"Field 1": "Value 1", "Field N": "Value N"}]
Initially I tried:
LINE_BREAKER = }(,\s){
which split the events, with the exception of the first and last records, which were not indexed correctly due to the "[" and "]" characters leading and trailing the payload. After many attempts I have been unable to make it work, but based on what I've read this seems to be the most intuitive way of defining the capture groups:
LINE_BREAKER = ^([){|}(,\s){|}(])$
It doesn't work; instead it indexes the entire payload as one event, formatted correctly but unusable. Could somebody please suggest how to correctly define the LINE_BREAKER parameter for the sourcetype? Here is the full version I'm using:
[area:prd:json]
SHOULD_LINEMERGE = false
TRUNCATE = 8388608
TIME_PREFIX = \"Updated\sdate\"\:\s\"
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TZ = Europe/Paris
MAX_TIMESTAMP_LOOKAHEAD = -1
KV_MODE = json
LINE_BREAKER = ^([){|}(,\s){|}(])$
Other resolutions to my problem are welcome as well! Best regards, Andrew
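One pattern that comes up for flat JSON arrays like this is to keep the simple object-boundary break and strip the enclosing brackets with SEDCMD at index time, rather than trying to handle everything in LINE_BREAKER. A sketch, untested against this exact payload (the SEDCMD class names are arbitrary):
[area:prd:json]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}(,\s*)\{
SEDCMD-strip_leading_bracket = s/^\s*\[//
SEDCMD-strip_trailing_bracket = s/\]\s*$//
KV_MODE = json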
I just tested this and it works perfectly. I tweaked a few names and combined the file contents from @kiran_panchavat with the regex from @PickleRick and I'm good to go. Thanks guys!
props.conf
[source::auditd]
TRANSFORMS-set=setnull
transforms.conf
[setnull]
REGEX = acct=appuser.*exe=/usr/(sbin/crond|bin/crontab)
DEST_KEY = queue
FORMAT = nullQueue
Can you please provide a more detailed example of the issue you are facing (anonymising sensitive information such as IP addresses, etc.)?
Your expression is matching on at least one word character or non-word character, i.e. almost anything. Because it is lazy, it then reduces back to the fewest characters that satisfy this, i.e. one character, so each instance of x is now a single character. This is why you are blowing the max_match limit. Try either including a trailing anchor pattern (and removing the ?), or tightening the matching pattern.
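Purely for illustration — the original rex isn't quoted in this reply, so the patterns below are invented to show the difference: the first, lazy version yields one character per match and piles matches up quickly, while the second, anchored and greedy version produces a single match running to the end of the field.
| rex field=_raw "(?<x>[\w\W]+?)" max_match=0
| rex field=_raw "(?<x>[\w\W]+)$"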
Thank you @tscroggins 
Server C is SUSE 15
I am trying to make a curl request to a direct JSON link and fetch the result. When I hardcode the URL it works fine, but my URL is dynamic and gets created based on the search result of another query. I can see my curl command is correct, but it doesn't give proper output.
1) The limits.conf file is configured by your administrator: https://docs.splunk.com/Documentation/Splunk/9.2.0/admin/limitsconf#.5Brex.5D
2) When I search for similar questions to yours, I find some possible answers to your problem:
https://community.splunk.com/t5/Splunk-Search/Rex-has-exceeded-configured-match-limit/m-p/391837
https://community.splunk.com/t5/Splunk-Search/Regex-error-exceeded-configured-match-limit/m-p/469890
https://community.splunk.com/t5/Splunk-Search/Error-has-exceeded-configured-match-limit/m-p/539725
3) You'll notice in these other answers that the questioners supply a log sample and their query to show what the rex is working against. Only do this if the event information is not sensitive, but without that information it'll be difficult for the community to help you. That's why I'm supplying you with some other information too.
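For reference, the settings that point 1) refers to live under the [rex] stanza of limits.conf; the values below are the documented defaults in recent versions, shown only as an example of what an administrator would adjust:
[rex]
match_limit = 100000
depth_limit = 1000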
Have you looked at this: https://splunkbase.splunk.com/app/1924 ?
Hello @bowesmana
Q: Are you collecting a _raw field or are you collecting fields without _raw?
>> I am not sure what you meant; my understanding is that _raw is what gets pushed to index=summary.
Q: Are you specifying an index to collect to? What's your collect command?
>> I figured out why the collect command didn't push the data: I put the wrong index name. I've struck out the incorrect name below.
| collect index= summary summary_test_1 testmode=false file=summary_test_1.stash_new name="summary_test_1" marker="report=\"summary_test_1\""
Q: Are you running this as an ad-hoc search or as a scheduled saved search?
>> I ran this as an ad-hoc search as a proof of concept using past time. Once it's working, I will use a scheduled saved search for future time.
I added your suggestion to my search below and it worked, although I don't completely understand how. Note that addtime=true/false didn't make any difference. I appreciate your help. Thank you. If you have an easier way, please suggest it.
index=original_index ``` Query ```
| addinfo
| eval _raw=printf("_time=%d", info_max_time)
| foreach "*" [| eval _raw=_raw.case(isnull('<<FIELD>>'),"", mvcount('<<FIELD>>')>1,", <<FIELD>>=\"".mvjoin('<<FIELD>>',"###")."\"", true(), ", <<FIELD>>=\"".'<<FIELD>>'."\"") | fields - "<<FIELD>>" ]
| table ID, name, address
| collect index= summary testmode=false addtime=true file=summary_test_1.stash_new name="summary_test_1" marker="report=\"summary_test_1\""
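For intuition on why this works: after addinfo and the foreach, each row's _raw has been rebuilt into comma-separated key="value" pairs, so a single summary event ends up looking roughly like the line below (the values are invented and the field names are just the ones from the table command), and that reconstructed _raw is what collect writes into the summary index as a stash event:
_time=1714000000, ID="42", name="example_name", address="example_address"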
@PickleRick & @kiran_panchavat, thank you guys so much for the assist. I really appreciate it. I'll give it a test and see if it works for me. Thanks again!
Hi @dhirendra761, sorry no: the only ways to remove part of events are TRUNCATE or SEDCMD or transforms. You can also remove the full event before indexing. Ciao. Giuseppe
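For illustration, the SEDCMD route Giuseppe mentions is an index-time setting in props.conf, typically on the indexer or heavy forwarder; the sourcetype name and pattern below are placeholders, not anything from this thread:
[my_sourcetype]
SEDCMD-trim_payload = s/verbose_debug_blob=\S+//g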