All Posts



Hello, we have the universal forwarder running on many machines. In general, memory usage is 200MB or below. However, after adding the stanza below to inputs.conf, it balloons to around 3000MB (3GB) on servers where the /var/www path contains some content.

[monitor:///var/www/.../storage/logs/laravel*.log]
index = lh-linux
sourcetype = laravel_log
disabled = 0

These logs are neither plentiful nor especially active, so I'm confused by the large spike in memory usage. There are only a handful of logs and they're updated infrequently, yet the spike happens anyway. I've tried to be as specific with the file path as I can (I still need the wildcard directory segment), but that doesn't seem to bring any better performance. There may be a lot of files under that path, but only a handful actually match the monitor stanza criteria. Any suggestions on what can be done? Thanks in advance.
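One mitigation worth trying, sketched below rather than offered as a verified fix: the `...` wildcard makes the forwarder's tailing processor walk and track everything under /var/www, which is a common cause of high memory use even when few files match. Constraining the walk with whitelist and ignoreOlderThan (both standard inputs.conf settings; the 7d value is an assumption to tune against your log rotation) may help:

```
[monitor:///var/www/.../storage/logs]
index = lh-linux
sourcetype = laravel_log
disabled = 0
# Tail only files whose full path matches this regex
whitelist = laravel[^/]*\.log$
# Skip files not modified within this window (tune to your rotation)
ignoreOlderThan = 7d
```

Moving the filename filter into whitelist lets the path in the stanza stay shorter, which reduces how much of the tree the wildcard has to expand.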
Hi Rich,  Thank you for showing me what I need to do to get slurm data into Splunk. --Tuesday Armijo
Hello, I need help perfecting a sourcetype that doesn't index my JSON files correctly when I define multiple capture groups in the LINE_BREAKER parameter. I'm using this other question to try to figure out how to make it work: https://community.splunk.com/t5/Getting-Data-In/How-to-handle-LINE-BREAKER-regex-for-multiple-capture-groups/m-p/291996

In my case the JSON looks like this:

[{"Field 1": "Value 1", "Field N": "Value N"}, {"Field 1": "Value 1", "Field N": "Value N"}, {"Field 1": "Value 1", "Field N": "Value N"}]

Initially I tried:

LINE_BREAKER = }(,\s){

which split the events, except that the first and last records were not indexed correctly because of the "[" and "]" characters leading and trailing the payload. After many attempts I have been unable to make it work, but based on what I've read this seems to be the most intuitive way of defining the capture groups:

LINE_BREAKER = ^([){|}(,\s){|}(])$

It doesn't work; instead it indexes the entire payload as one event, formatted correctly but unusable. Could somebody please suggest how to correctly define the LINE_BREAKER parameter for this sourcetype? Here is the full version I'm using:

[area:prd:json]
SHOULD_LINEMERGE = false
TRUNCATE = 8388608
TIME_PREFIX = \"Updated\sdate\"\:\s\"
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TZ = Europe/Paris
MAX_TIMESTAMP_LOOKAHEAD = -1
KV_MODE = json
LINE_BREAKER = ^([){|}(,\s){|}(])$

Other resolutions to my problem are welcome as well! Best regards, Andrew
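One approach that may help here, offered as a sketch rather than a confirmed fix: break only between objects, and strip the surrounding brackets with SEDCMD so the first and last events come out clean. Only the changed settings are shown; the other stanza settings stay as posted.

```
[area:prd:json]
SHOULD_LINEMERGE = false
# The capture group (comma plus whitespace) is consumed as the event
# boundary, so each {...} object becomes its own event
LINE_BREAKER = }(\s*,\s*){
# Strip the leading "[" from the first event and the trailing "]" from
# the last, so they parse as plain JSON objects
SEDCMD-strip_open = s/^\[//
SEDCMD-strip_close = s/\]$//
```

The idea is that LINE_BREAKER only ever needs to match between events, while the bracket cleanup is a per-event edit, which is what SEDCMD is for.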
I just tested this and it works perfectly. I tweaked a few names and combined the file contents from @kiran_panchavat with the regex from @PickleRick and I'm good to go. Thanks guys!

props.conf

[source::auditd]
TRANSFORMS-set = setnull

transforms.conf

[setnull]
REGEX = acct=appuser.*exe=/usr/(sbin/crond|bin/crontab)
DEST_KEY = queue
FORMAT = nullQueue
Please can you provide a more detailed example of the issue you are facing (anonymising sensitive information such as IP addresses, etc.)?
Your expression matches at least one word character or non-word character, i.e. almost anything. The lazy quantifier then reduces that back to the fewest characters that still match, i.e. a single character, so each match is just one character long. This is why you are blowing past the match limit. Try either including a trailing anchor pattern (and removing the ?), or tightening the matching pattern.
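To illustrate the lazy-quantifier behaviour described above with a hypothetical pattern (not the asker's actual rex), here is a small Python sketch: the lazy `(?:\w|\W)+?` yields one match per character, while the greedy version yields a single match over the whole string.

```python
import re

text = "x" * 20

# Lazy: +? stops at the fewest characters possible (one), so findall
# returns one match per character in the input
lazy = re.findall(r"(?:\w|\W)+?", text)

# Greedy: + consumes as much as possible, so the whole string is one match
greedy = re.findall(r"(?:\w|\W)+", text)

print(len(lazy))    # 20
print(len(greedy))  # 1
```

Scaled up to real event sizes, the lazy form multiplies the match count by the event length, which is exactly how the configured match limit gets exceeded.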
Thank you @tscroggins 
Server C is SUSE 15
I am trying to make a curl request to a direct JSON link and fetch the result. When I hardcode the URL it works fine, but my URL is dynamic and gets created based on the search result of another query. I can see my curl command is correct, but it doesn't give proper output.
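A common cause of this symptom is the dynamic URL not being quoted when passed to curl, so characters like ? and & get mangled by the shell. A minimal sketch, assuming a hypothetical endpoint and id (in practice these would come from the earlier search result):

```shell
# Hypothetical values standing in for the result of the earlier query
base_url="https://example.com/api/report"
dynamic_id="12345"

# Build the URL, then always pass it to curl inside double quotes
json_url="${base_url}/${dynamic_id}.json?format=full&limit=10"
echo "$json_url"

# Uncomment to perform the real request:
# curl -sf -H "Accept: application/json" "$json_url"
```

Without the double quotes around $json_url, the & would background the command and everything after it would be lost, which often looks like "curl gives no proper output".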
1) The limits.conf file is configured by your administrator: https://docs.splunk.com/Documentation/Splunk/9.2.0/admin/limitsconf#.5Brex.5D

2) When I search for similar questions to yours, I find some possible answers to your problem:
https://community.splunk.com/t5/Splunk-Search/Rex-has-exceeded-configured-match-limit/m-p/391837
https://community.splunk.com/t5/Splunk-Search/Regex-error-exceeded-configured-match-limit/m-p/469890
https://community.splunk.com/t5/Splunk-Search/Error-has-exceeded-configured-match-limit/m-p/539725

3) You'll notice in those other answers that the askers supply a log sample and their query to show what the rex is working against. Only do this if the event information is not sensitive. Without that information, though, it will be difficult for the community to help you, which is why I'm supplying some other information too.
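If the pattern itself can't be simplified, the limit can also be raised by the administrator. A sketch of the relevant limits.conf stanza (the values shown are illustrative, not recommendations):

```
# limits.conf on the search head
[rex]
# PCRE match limit used by the rex command
match_limit = 100000
depth_limit = 1000
```

Raising the limit treats the symptom, though; a tighter regex is usually the better fix.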
Have you looked at this? https://splunkbase.splunk.com/app/1924
Hello @bowesmana

Q: Are you collecting a _raw field or are you collecting fields without _raw?
A: I'm not sure what you meant; my understanding is that _raw is what gets pushed to index=summary.

Q: Are you specifying an index to collect to? What's your collect command?
A: I figured out why the collect command didn't push the data: I had put the wrong index name. In the original post the incorrect name was shown struck through; it appears next to the corrected one below:

| collect index= summary summary_test_1 testmode=false file=summary_test_1.stash_new name=summary_test_1" marker="report=\"summary_test_1\""

Q: Are you running this as an ad-hoc search or as a scheduled saved search?
A: I ran this as an ad-hoc search as a proof of concept over past time. Once it's working I will use a scheduled saved search going forward.

I added your suggestion to my search below and it worked, although I don't completely understand how. Note that addtime=true/false didn't make any difference. I appreciate your help, thank you. If you have an easier way, please suggest it.

index=original_index ``` Query ```
| addinfo
| eval _raw=printf("_time=%d", info_max_time)
| foreach "*"
    [| eval _raw=_raw.case(isnull('<<FIELD>>'),"",
        mvcount('<<FIELD>>')>1,", <<FIELD>>=\"".mvjoin('<<FIELD>>',"###")."\"",
        true(), ", <<FIELD>>=\"".'<<FIELD>>'."\"")
     | fields - "<<FIELD>>" ]
| table ID, name, address
| collect index=summary testmode=false addtime=true file=summary_test_1.stash_new name="summary_test_1" marker="report=\"summary_test_1\""
@PickleRick & @kiran_panchavat, thank you guys so much for the assist. I really appreciate it. I'll give it a test and see if it works for me. Thanks again!
Hi @dhirendra761, sorry, no: the only ways to remove parts of events are TRUNCATE, SEDCMD, or transforms. You can also remove the full event before indexing. Ciao. Giuseppe
Hi @gcusello, thank you for the response. In fact, the file content is mixed-syntax: some lines are in JSON format and some are in a log-info-type format, e.g.

2024-02-08 | 23.118 | <hostname> | DEBUG | QueryForSuccess

We search the specific content with different search strings. I agree defining SEDCMD is not easy. Is there any other way we can drop the unused data and index only the wanted data?
Folks, I'm new to Splunk but learning. However, I've been stuck and I need help with what I think is a simple query and dashboard.

1. I'm able to create a simple XML dashboard with a query that lists a number of users and what they are doing, from an indexed log file (server.log). It works fine. Query sample: index=Test* "`Users`"

2. I have a dataset CSV file containing server names and clusters that I uploaded into my space.

Now, how do I combine the two and create a dashboard from my dataset file and the server log, so that it includes the user info from the indexed server logs along with the server and cluster info from the dataset CSV file? Please advise.
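One way to combine an indexed search with an uploaded CSV is a lookup. A sketch, assuming the CSV is named server_cluster.csv with columns server and cluster, and that the log events have user and host fields where host matches the CSV's server column (all of these names are assumptions about your data):

```
index=Test* "`Users`"
| lookup server_cluster.csv server AS host OUTPUT cluster
| table user host cluster
```

The lookup command enriches each event with the cluster value from the CSV, and the resulting table can be saved straight to a dashboard panel.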
Hi, I need to know how and where to set the value of allow_skew for the Enterprise Security app, as I have many alerts triggering every 5 minutes. Thank you.
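allow_skew is a per-search scheduler setting in savedsearches.conf, so one place to set it is on the searches themselves (the stanza name below is a placeholder for your actual search name, and 5m is an example value, not a recommendation):

```
# savedsearches.conf
[My Correlation Search]
# Let the scheduler spread the start time by up to 5 minutes
allow_skew = 5m
```

Setting it in the [default] stanza of an app's savedsearches.conf applies the skew to every scheduled search in that app instead of one at a time.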
Hi @dhirendra761, it's possible to truncate a log event by defining the length of each event, but with a JSON format you would lose the JSON structure, and with it the option of using the spath command to extract fields; you would have to extract all the fields manually, so I suggest avoiding that. Maybe (I'm not sure) it's possible to identify a part of the log event that can be removed (using SEDCMD in props.conf) while maintaining the JSON structure, but it isn't so easy! Ciao. Giuseppe
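As a rough illustration of the SEDCMD idea above (the sourcetype and field name are hypothetical, and this only works if the removable field is simple enough for a regex):

```
# props.conf
[your_json_sourcetype]
# Drop a single large string-valued field while keeping the JSON parseable;
# note this can leave a dangling comma if the field is the last one in the object
SEDCMD-drop_big_field = s/"big_payload":\s*"[^"]*",?\s*//g
```

Test this on sample data with spath before rolling it out: an edit that breaks the JSON loses the automatic field extraction Giuseppe mentions.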
I'm collecting PaperCut logs from a Windows server:

[monitor://C:\Program Files\PaperCut MF\server\logs\print-logs\printlog_*.log]
disable=false

The output and index are applied via a deployment server. I'm searching with index=* host=<hostname>. The splunkforwarder service account has read access on the folder and its children.
Hi Splunkers, I'm trying to set up markdown in a text element, and when I use the GUI to modify the color or move the layer up or down, it has no effect; only changing the JSON content directly works. The interesting thing is that the same version on another server works fine. Any suggestions? Can any expert help?