All Posts



Hello @isoutamo. Missing double quotes making parsing fail? Looks like a bug to me. We had a similar bug some time back on Splunk version 6.
Hello @yuanliu. I noticed an issue when I cloned my classic dashboards to Studio. Is this happening with brand new dashboards created in Studio? My 9.2.2 (Cloud) version has been verified to have the same issue when cloning from Classic to Studio. However, there are no issues with the original Studio dashboards.
I have many Dashboard Studio dashboards by now. Those created earlier function normally. But starting a few days ago, newly created dashboards cannot use the "Open in Search" function any more; the magnifying glass icon is greyed out. "Show Open In Search Button" is checked. Any insight? My server is on 9.2.2 and there has been no change on the server side.
I believe this feature is for the UF only; the code changes were made only for the UF.
I'm having a similar query, but when using the case below, only <=500 and >=1500 are getting counted in stats. Durations of 6000 are also getting counted in the >=1500ms case; not sure why.

index=*
| rex "TxDurationInMillis=(?<TxDurationInMillis>\d+)"
| eval ResponseTime = tonumber(TxDurationInMillis)
| eval ResponseTimeCase=case(
    ResponseTime <= 500, "<=500ms",
    ResponseTime > 500 AND ResponseTime <= 1000, ">500ms and <=1000ms",
    ResponseTime > 1000 AND ResponseTime <= 1400, ">1000ms and <=1400ms",
    ResponseTime > 1400 AND ResponseTime < 1500, ">1400ms and <1500ms",
    ResponseTime >= 1500, ">=1500ms")
| table TxDurationInMillis ResponseTime ResponseTimeCase
You can apply EVENT_BREAKER settings in your props.conf:

1. Go to your app/local directory on your Deployment Server.
2. Create or edit the props.conf file.
3. Update EVENT_BREAKER with the appropriate regex pattern for your source. Typically, this is the same as your LINE_BREAKER regex.
4. Reload the serverclass app on the Deployment Server.
5. Verify that the updated props.conf is successfully deployed to the Universal Forwarder.

That should complete the setup.

Refer: https://community.splunk.com/t5/Getting-Data-In/How-to-apply-EVENT-BREAKER-on-UF-for-better-data-distribution/m-p/614423

Hope this helps.
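For step 3, a minimal props.conf stanza on the UF might look like the sketch below. The sourcetype name is a placeholder, and the newline-based breaker is an assumption; adjust both to match your actual data (typically mirroring your LINE_BREAKER regex).

```
# props.conf inside the app deployed to the Universal Forwarder
[my_sourcetype]
# Enable event breaking on the UF so forwarded chunks end on event boundaries
EVENT_BREAKER_ENABLE = true
# Break events at runs of newlines (adapt to your source's event boundary)
EVENT_BREAKER = ([\r\n]+)
```

EVENT_BREAKER_ENABLE and EVENT_BREAKER are honored only by Universal Forwarders; they control how the UF splits the stream for load balancing, not how the indexer parses events.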
Hi @Gregory.Burkhead, Have you had a chance to check out the reply from @Mario.Morelli? Were you able to find a solution that you could share, or do you still need help?
Hi @Husnain.Ashfaq, Were you able to check out and try what was suggested by @Mario.Morelli? 
Apart from very specific cases of systems which have constant memory requirements, there is no way of telling how many resources you will need, especially without knowing your load, your data, and so on. Having said that, there are general sizing hints here: https://docs.splunk.com/Documentation/Splunk/latest/Capacity/Referencehardware

Additionally, IMHO indexers should not use swap. At all. If you have to reach to disk for "memory", you're slowing your I/O, which means you're building up your queues, which means you're using up even more memory. That's a downhill path. (OK, you can have a very small swap to keep some sleeping system daemons out of active RAM, but that's usually not worth the bother.)
Hard to say without knowing your exact data and config. But Splunk does tend to try to guess the time format, and it's usually not the best idea to let it. So if you don't have timestamps in your data, it's best to explicitly configure your sourcetype so that Splunk doesn't guess but simply assigns the current timestamp (as @gcusello already showed).
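A minimal sketch of such a sourcetype stanza (the stanza name is a placeholder; DATETIME_CONFIG = CURRENT is the documented way to skip timestamp extraction and stamp events with the current time):

```
# props.conf on the parsing tier (indexer or heavy forwarder)
[my_timestampless_sourcetype]
# Don't try to extract a timestamp from the raw data;
# use the time the event is processed instead
DATETIME_CONFIG = CURRENT
# Treat each line as its own event rather than merging while guessing
SHOULD_LINEMERGE = false
```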
There is an add-on for Palo Alto solutions. https://splunkbase.splunk.com/app/7523 It is Splunk-supported so it should have a pretty decent manual.
1. Are you properly providing password and oldpassword? 2. Just for the sake of clarity - you're trying to update a local user, right?
OK. It seems... overly complicated. As I understand it, you have a customer with DB Connect inputs that pull data from production databases, right? But no audit logs, right? And now you want to pull the audit logs, which are not going to be sent to Splunk, and send them away? That makes no sense. (Also, I'm not sure all databases actually store audit data in the databases themselves; as far as I remember, MySQL logged audit events to normal flat text files, but I haven't worked with it for quite some time so I might be wrong here.) Why not use something that will directly connect your source with your destination, without forcing Splunk components into doing something they are not meant to do?
OK. Once again - did you "connect MySQL to Splunk using DB Connect" on the Universal Forwarder? How?
I have a question about breaking up a single line of data to send to the Splunk indexer. We are sending data which can have over 50,000 characters on a single line. I would like to know if there is a way to break up the data on the source server with the Universal Forwarder before sending it to the indexer, and then reassemble it after it arrives at the indexer. We would like to know if this is possible, rather than having to increase the TRUNCATE size on the indexer to take all the data at once.
You're overcomplicating your case.

<your initial search> will give you a list of printer activities. As a side note, you didn't take into account the fact that there is a field called count; I assume it can contain a value higher than 1. If it doesn't, you can probably use count instead of sum later on.

For naming's sake, we'll overwrite the format name:

| eval size_paper=if(size_paper="11x17","legal",size_paper)

Now you can use the paper format to create additional fields based on the paper size value:

| eval {size_paper}_jobs=jobs
| eval {size_paper}_pages=pages

Now you can just aggregate:

| stats sum(*_jobs) as *_jobs sum(*_pages) as *_pages sum(jobs) as overall_count sum(pages) as overall_pages by prnt_name

And all that's left is enriching your results with your lookup contents:

| lookup printers_csv prnt_name OUTPUT location
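Putting those pieces together against the sample data from the question, the whole pipeline might look like the sketch below. Note that the index name and lookup name are taken from the thread, and the sample events use pages_printed rather than pages, so the field names are adjusted to match; treat this as a sketch to adapt, not a drop-in answer.

```
index=printer
| eval size_paper=if(size_paper="11x17","legal",size_paper)
| eval {size_paper}_jobs=jobs
| eval {size_paper}_pages=pages_printed
| stats sum(*_jobs) as *_jobs sum(*_pages) as *_pages sum(jobs) as overall_count sum(pages_printed) as overall_pages by prnt_name
| lookup printers_csv prnt_name OUTPUT location
```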
After looking over my initial post, I thought I would clarify a little more as to what I am after here. I am looking to get the total print jobs that are "letter", total pages printed that are "letter", total print jobs that are "11x17" (legal), and total pages printed that are "11x17", in addition to my initial working query's sum of total print jobs and total pages printed logged by a specific printer. Thanks
Have a working query to give me a list of all printers, total job count, total page count, and the location of printers using a lookup. Sample data, lookup, and query are below.

Sample data (print logs from index=printer):

prnt_name  jobs  pages_printed  size_paper
CS001      1     5              letter
CS001      1     10             11x17
CS002      1     20             11x17
CS003      1     10             letter
CS003      1     15             11x17

Lookup data (printers.csv):

prnt_name  location
CS001      office
CS002      dock
CS003      front

Splunk query:

index=printer
| stats count sum(pages_printed) AS tot_prnt_pgs by prnt_name
| lookup printers.csv prnt_name AS prnt_name OUTPUT location
| table prnt_name, location, count, tot_prnt_pgs

Splunk query results:

prnt_name  location  count  tot_prnt_pgs
CS001      office    2      15
CS002      dock      1      20
CS003      front     2      25

I have been trying to use a (count(eval(if...))) clause, but I'm not sure how to implement it or if that is even the correct way to get the results I am after. I have been using various arguments from other Splunk posts but can't seem to make it work. Below is the output I am trying to get; "ltr" represents letter and "lgl" represents 11x17.
prnt_name  location  count  tot_prnt_pgs  ltr_count  ltr_tot_pgs  lgl_count  lgl_tot_pgs
CS001      office    2      15            1          5            1          10
CS002      dock      1      20            0          0            1          20
CS003      front     2      25            1          10           1          15

Appreciate any time given on this.
Thanks for your help, Giusepe. This is helpful for getting the duration. However, I would also like to table the results from filtering the events in sourcetypeA along with the duration. This solution does not seem to merge the two resulting searches. e.g. table _time computerName sessionID filteredInfoIWant1 filteredInfoIwant2 duration
Try messing with a custom URL in a markup box, but I would not hold out hope.