All Posts


Thanks for the suggestion to append a space to the string. I have tried: | eval new_field = existing_field + " " and: | eval new_field = existing_field + " " Both show it adjusted on the statistics page but not on the dashboard.
I want to omit data for non-business hours and weekends. I have tried the query below and am not getting any results. The portion I added is the eval hour / eval dow / where filter:

eval hour = tonumber(strftime(_time,"%H"))
| eval dow = tonumber(strftime(_time,"%w"))
| where hour>=6 AND hour<=18 AND dow!=0 AND dow!=6
| mstats sum(builtin:apps.web.actionCount.load.browser:parents) As "Load_Count1", avg(builtin:apps.web.visuallyComplete.load.browser:parents) As "Avg_Load_Response1", sum(builtin:apps.web.actionCount.xhr.browser:parents) As "XHR_Count1", avg(builtin:apps.web.visuallyComplete.xhr.browser:parents) As "Avg_Xhr_Response1" where index=itsi_im_metrics AND source.name="DT_Prod_SaaS" AND entity.browser.name IN ("Desktop Browser","Mobile Browser") AND entity.application.name="xxxxx" earliest=-31d@d latest=@d-1m by entity.application.name
| eval hour = tonumber(strftime(_time,"%H"))
| eval dow = tonumber(strftime(_time,"%w"))
| where hour>=6 AND hour<=18 AND dow!=0 AND dow!=6
| eval Avg_Load_Response1=round((Avg_Load_Response1/1000),2), Avg_Xhr_Response1=round((Avg_Xhr_Response1/1000),2), Load_Count1=round(Load_Count1,0), XHR_Count1=round(XHR_Count1,0)
| table entity.application.name, Avg_Load_Response1
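For reference, one shape this kind of filter can take: mstats has to be the first command in the pipeline, and _time is only present per result row when mstats buckets by a span, so the hour/day-of-week eval has to come after an mstats that uses span=. A minimal sketch with a single metric, reusing the index, source, and metric names from the query above; span=1h and the final re-aggregation are assumptions:

| mstats sum(builtin:apps.web.actionCount.load.browser:parents) As "Load_Count1" where index=itsi_im_metrics AND source.name="DT_Prod_SaaS" AND entity.application.name="xxxxx" earliest=-31d@d latest=@d-1m span=1h by entity.application.name
| eval hour = tonumber(strftime(_time,"%H")), dow = tonumber(strftime(_time,"%w"))
| where hour>=6 AND hour<=18 AND dow!=0 AND dow!=6
| stats sum(Load_Count1) As "Load_Count1" by entity.application.name

Averages would need to be rebuilt from per-bucket sums and counts rather than averaged again, since an average of hourly averages is not the overall average.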
Hey PaulPanther, sorry for the delayed response. Yes, this is for every user.
Currently have an active case open. Will gladly share the results when I get them!
Running queries on really large sets of data and sending the output to an outputlookup works well for weekly refreshed dashboards. Is there a way to have some numbers from the initial report go into a separate, second outputlookup for monthly tracking? For example, a weekly report or dashboard shows me details on a daily basis, plus the weekly summary - great. Now the weekly summary should additionally go to a separate file for the monthly view. Is there a way to 'tee' results to different outputlookups?
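One pattern that might fit, sketched with hypothetical lookup and field names: outputlookup passes its results on down the pipeline, so the detailed weekly results can be written first, then aggregated and appended to a second lookup with append=true.

... weekly search and stats ...
| outputlookup weekly_details.csv
| stats sum(count) As weekly_total by report_week
| outputlookup append=true monthly_summary.csv

With append=true, each weekly run adds rows to the monthly file instead of overwriting it.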
Numbers are usually aligned to the right and strings to the left. If a string contains only digits, a table panel may still align it to the right. To force it to remain a string (and be aligned to the left), you could append a space to it.
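A minimal sketch of that, with a hypothetical field name; the eval concatenation operator is a period:

| eval count = tostring(count) . " "

tostring() plus the trailing space keeps the value a string, so the table panel should leave it left-aligned.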
Good day, I have a query that I would like to add more information to. The query pulls all users that accessed an AI site and gives me, for each weekday, a 1 or 0 for whether the site was accessed. The query gets a user from index db_it_network, and I would like to add the department of each user by querying index=collect_identities sourcetype=ldap:query. The users appear in the collect_identities index as 'email' and their department in the bunit field.

index=db_it_network sourcetype=pan* url_domain="www.perplexity.ai" OR app=claude-base OR app=google-gemini* OR app=openai* OR app=bing-ai-base
| where date_wday="monday" OR date_wday="tuesday" OR date_wday="wednesday" OR date_wday="thursday" OR date_wday="friday"
| eval app=if(url_domain="www.perplexity.ai", url_domain, app)
| table user, app, date_wday
| stats count by user app date_wday
| chart count by user app
| sort app 0

Note: the | stats | chart is necessary to make results distinct, so that one user returns one result per app per day.
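One way this kind of enrichment is often done, as a sketch only (it assumes the email values in collect_identities match the user values in db_it_network, which may need normalizing):

index=db_it_network sourcetype=pan* url_domain="www.perplexity.ai" OR app=claude-base OR app=google-gemini* OR app=openai* OR app=bing-ai-base
| eval app=if(url_domain="www.perplexity.ai", url_domain, app)
| join type=left user
    [ search index=collect_identities sourcetype=ldap:query
      | eval user=email
      | stats latest(bunit) As department by user ]
| table user, department, app, date_wday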
I can get a numeric table aligned to the left in the statistics view with

| eval count=printf("%-10d",<your_field>)

However, the alignment does not translate to the dashboard. Any insight into why this doesn't carry over, or another way to control the alignment of numeric results on a dashboard for aesthetic purposes?
Unfortunately, it doesn't work in either case. I also tried working with the raw logs on regex101 and came up with this regex:

EventCode=4634+[^$]+Security ID:\s+.*\$

But I'm still getting the logs.
Okay, looks good. Could you please search in the summary index over all time? And please ensure you have access to the summary index.
OK. I'd try to verify whether the transform is called at all; I have a feeling that for some reason it is not. For testing, you can create a "sure fire" transform and check whether it is being applied. Are you sure you're doing this on the right component?
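For example, a trivial transform that stamps an indexed marker field on every event it touches could serve as that test; the stanza, field, and sourcetype names here are made up:

# transforms.conf
[test_sure_fire]
INGEST_EVAL = test_marker="transform_ran"

# props.conf
[your_sourcetype]
TRANSFORMS-test = test_sure_fire

If newly indexed events don't come back for a search like test_marker::transform_ran, the transform isn't being applied where you think it is.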
I tried this conf:

[remove_logoff]
REGEX = "(?:EventCode=4634)"
INGEST_EVAL = queue=if(match(_raw,"Security\sID:[\s]+.*\$"), "nullQueue", queue)

and also with REGEX = . but in both cases I'm still getting the logs.
Hi @PaulPanther, this is the screenshot after adding testmode=true.
AFAIR I had mixed results with a transform that had nothing in the REGEX field. Try explicitly adding REGEX = . to the transform so that it matches anything.
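I.e., keeping the stanza from above but with the regex unquoted and matching anything (quotes in a transforms.conf REGEX value are matched literally, which may itself be part of the problem):

[remove_logoff]
REGEX = .
INGEST_EVAL = queue=if(match(_raw,"Security\sID:[\s]+.*\$"), "nullQueue", queue)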
Hi Zubair,

Try something like this:

[YOUR_SOURCETYPE]
SHOULD_LINEMERGE=true
LINE_BREAKER=(, )
TRUNCATE=9999999
BREAK_ONLY_BEFORE={
MUST_BREAK_AFTER=}
SEDCMD-cleanup-before=s/^\{ "User" : \[\s\{/{/g
SEDCMD-cleanup-after-2=s/\s\[\}/}/g

It's best if you can run that on a test instance first with some sample data to see how it works for you.
Sorry for not being clearer; I need help with the props.conf attributes and the regex to match the event break.
Yes, a change in the data format can cause incompatibilities with earlier data. That's true. The issue with your data in general (possibly not in the presented example) is, as I said, that you have separate arrays which Splunk can parse into separate multivalued fields that are not related to one another. If you are absolutely sure that both of those multivalued fields have the same cardinality and are related 1-to-1, you can try to join them using the mvzip() function, then mvexpand and split the values back to get the corresponding pairs. One caveat though: since the values get merged into a single value, if they contain the delimiter you choose for mvzipping, it's gonna get ugly when you try to split them again. So it's possible, but pretty ugly (and it works only under some strong assumptions).
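For illustration, the usual shape of that dance, with placeholder field names and a pipe delimiter (assumed not to occur in the values):

| eval pair = mvzip(field_a, field_b, "|")
| mvexpand pair
| eval field_a = mvindex(split(pair, "|"), 0), field_b = mvindex(split(pair, "|"), 1)

After mvexpand, each result carries exactly one zipped pair, which the split/mvindex step pulls back apart into corresponding single values.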
Do you need help with how to configure props.conf, or with where to configure it?
Yes, the files are getting fully overwritten. I checked the input status and found no issues.
About the number of files: yes, I figured as much. It was supposed to be a little joke to lighten the mood a bit. Maybe a missed one. Never mind.

"What It Does: This setting includes the file's last modification time in the checksum calculation." - No, it does not. It includes the literal "DATETIME" string in the CRC calculation (which doesn't change the situation much). The only possible "dynamic" setting specified in the spec file for inputs.conf is <SOURCE>, which is substituted with each file's path. Other than that, the strings are constant literals.

Are the files updated in place or fully rewritten? As usual with any problem ingesting files, the first debugging steps are to run splunk list monitor and splunk list inputstatus and see if there's something unusual about those files.
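For context, the setting under discussion typically looks like this in inputs.conf (the monitor path is hypothetical); <SOURCE> is the only value that gets substituted, with each file's path, while anything else, such as the string DATETIME, goes into the CRC verbatim:

[monitor:///var/log/myapp/*.csv]
crcSalt = <SOURCE>

And the two debugging commands mentioned:

$SPLUNK_HOME/bin/splunk list monitor
$SPLUNK_HOME/bin/splunk list inputstatus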