All Posts


Hi @Muthu_Vinith, did you test my approach? Ciao. Giuseppe
Hi @parthiban, if you only want to be sure that the correlation_IDs of the first search are also present in the second, you can use a subsearch:

index=Test1 invoked_component="XXXX" "genesys" correlation_id="*" message="Successfully received" [ search index=Test2 invoked_component="YYYY" correlation_id="*" message IN ("Successfully created", "Successfully updated") | dedup correlation_id | fields correlation_id ] | stats count by correlation_id

This method works only if the subsearch returns fewer than 50,000 results; otherwise, try something like this:

(index=Test1 invoked_component="XXXX" "genesys" correlation_id="*" message="Successfully received") OR (index=Test2 invoked_component="YYYY" correlation_id="*" message IN ("Successfully created", "Successfully updated")) | stats dc(index) AS index_count count by correlation_id | where index_count=2

Ciao. Giuseppe
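The second approach (stats dc(index) ... | where index_count=2) keeps only correlation IDs seen in both indexes. A minimal sketch of that logic outside Splunk, in Python with made-up (index, correlation_id) pairs standing in for the matched events:

```python
from collections import defaultdict

# Hypothetical events matched by the two base searches.
events = [
    ("Test1", "abc"), ("Test1", "abc"), ("Test1", "xyz"),
    ("Test2", "abc"), ("Test2", "def"),
]

indexes_seen = defaultdict(set)   # correlation_id -> set of indexes it appears in
counts = defaultdict(int)         # correlation_id -> total event count

for index, cid in events:
    indexes_seen[cid].add(index)
    counts[cid] += 1

# Equivalent of: | stats dc(index) AS index_count count by correlation_id
#                | where index_count=2
common = {cid: counts[cid] for cid in counts if len(indexes_seen[cid]) == 2}
print(common)  # only "abc" appears in both indexes
```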
Hello everyone, I checked the logs and the MaxMind tracker gets this error: "Could not download MaxMind GeoIP MD5, exiting." How can I solve this? Thank you.
Dear team, I need to join searches over two indexes and print the count of common correlation IDs. The two searches below work independently; both indexes have the same correlation_ID but different messages, so I need to print the count of correlation IDs common to both indexes.

index=Test1 invoked_component="XXXX" "genesys" correlation_id="*" message="Successfully received" | stats count by correlation_id

index=Test2 invoked_component="YYYY" correlation_id="*" | where message IN ("Successfully created", "Successfully updated") | stats count by correlation_id
I checked the error; it is: ERROR io.dropwizard.jersey.errors.LoggingExceptionMapper - Error handling a request: xxxxxxxx java.lang.NoClassDefFoundError: Could not initialize class java.awt.GraphicsEnvironment. How can I solve this?
Is this even possible?! Any help will be appreciated. I need to search for specific text in a Windows host name that is located, by naming convention, after a 4-, 5-, or 6-character campus site code. The specific text identifies the function of the host (e.g., print server, database server, domain controller). For example (these host names are simplified to illustrate the problem):

1.) host=L004PS4bldDC7, the campus site code is "L004" and the function code is "PS"
2.) host=L0005DB5bldPS, the campus site code is "L0005" and the function code is "DB"
3.) host=L00006DC6rDB1, the campus site code is "L00006" and the function code is "DC"

The data I'm searching through has 200+ campus site codes, each of which can be 4, 5, or 6 characters, and each search returns 1000+ events. We are using a lookup to identify the campus site attribute from the host name, but the same process doesn't work for the function code. The characters following the function code are chosen by the campus site admins and identify the physical location of each host on their campus (building name or room number). These physical location codes sometimes contain characters that match a function code required by the naming convention. For instance, if I search for events or metrics from print servers using *PS*, I also get them from non-print servers like host #2 above.
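One way to avoid the *PS* false positives is an anchored regex instead of a wildcard. A minimal sketch in Python, assuming the site code is "L" followed by 3-5 digits and the function code is one of a known set (PS, DB, DC as examples; the group names are hypothetical, not existing Splunk fields). The same pattern could drive a rex extraction in SPL:

```python
import re

# Lazy digit quantifier: the site code stops at the first point where a
# known function code can match, so "PS" later in the location suffix is
# never picked up by mistake.
HOST_RE = re.compile(r"^(?P<site>L\d{3,5}?)(?P<func>PS|DB|DC)")

for host in ["L004PS4bldDC7", "L0005DB5bldPS", "L00006DC6rDB1"]:
    m = HOST_RE.match(host)
    print(host, "->", m.group("site"), m.group("func"))
```

Because the regex is anchored at the start of the host name, the function code is only ever taken from the position right after the site code, regardless of what the admins append afterwards.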
There is a flag you can give to tstats, chunk_size; see the docs here: https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/tstats It talks about high-cardinality distinct counts; you could experiment to see whether that makes a difference.
Try something like this for #2:

index=newdata sourcetype=oracle source="/u0/DATA_COUNT.txt" loglevel="ERROR" [| makeresults | addinfo | eval earliest=relative_time(info_max_time,"-5m") | eval latest=info_max_time | table earliest latest] | stats dc(loglevel) by INSTANCE_NAME
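The subsearch above just computes a 5-minute window ending at the search's max time. The same arithmetic, sketched in Python (the datetime value is a hypothetical stand-in for info_max_time, which addinfo supplies at run time):

```python
from datetime import datetime, timedelta

# Hypothetical stand-in for info_max_time injected by addinfo.
info_max_time = datetime(2024, 1, 1, 12, 0, 0)

# Equivalent of: eval earliest=relative_time(info_max_time, "-5m")
earliest = info_max_time - timedelta(minutes=5)
latest = info_max_time
print(earliest, latest)
```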
| rex "output,1.*?(?<output1>\d+\s+rows)" | rex "output,6.*?(?<output6>\d+\s+records)"
I created a manual correlation search with the SPL below; the action is notable creation:

splunk_server=* index=* host=x.x.x.x "login" | stats count by src_ip | where count > 3

After that, I can see the notable created from the search tab (index=notable), but Incident Review still has no values. Any hints, guys?
How do I create a detection rule for LLMNR with Sysmon or WinEventLog? I'm kind of new to Splunk.
Hello Splunkers, I want to extract output1 and output6 fields from raw events.

Example event 1:
Message : output,1: The guess/tmp/var/tms/bmp_abcd/apm_salesforce/address_standardplot/serviceinput/AddressStandardiplot_S3_VariousDmsJob_V9_apm_unmatch_AVI-pct-STANDARD_123456789_9912333333-f12f-5cb9-aa10-9d101188ad47.banana.2 file, which contains 456 rows, was written to the standardplot-s3-abc-dev-005 bucket.

Example event 2:
Message : output,6: Input 0 consumed 123 records.

Desired result:
output1=456 rows
output6=123 records

The Message field is also not auto-extracted by Splunk, so I may need to use | rex field=_raw ... Please advise.
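A way to prototype the extraction patterns before wiring them into | rex field=_raw: the same regexes in Python, run against abbreviated stand-ins for the two raw events above.

```python
import re

# Abbreviated stand-ins for the two raw events described above.
event1 = ("Message : output,1: The ...banana.2 file, which contains 456 rows, "
          "was written to the standardplot-s3-abc-dev-005 bucket.")
event2 = "Message : output,6: Input 0 consumed 123 records."

# Non-greedy skip from the "output,N:" marker to the first "<digits> rows/records".
output1 = re.search(r"output,1.*?(?P<output1>\d+\s+rows)", event1).group("output1")
output6 = re.search(r"output,6.*?(?P<output6>\d+\s+records)", event2).group("output6")
print(output1, output6)  # 456 rows 123 records
```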
I need to be able to search Splunk for a message ID and identify all the users who received it. We currently have a SOAR playbook that uses the Microsoft EWS API, but that has been deprecated. As far as I know, the Graph API (its replacement) does not have an endpoint for a full message trace. Does anyone have a better alternative?
As far as I could tell, it was some type of silent memory or data limit. I got away with using estdc() instead, since 100% accuracy wasn't required for my use case. You can try limiting the time frame or the number of events and see where dc() starts breaking. I'm not sure how to fix the issue; maybe a config limit, or just more memory on the server.
If you're confident the sampling done for metrics.log will catch all of your UFs, then the search looks good.
Did you end up finding a solution to this? I have a similar case and there is no info anywhere on the matter.
I'm not sure if there was a DMC then, or was this before it? If it had already been released, maybe there were Licensing views where you could try to see which source/host/sourcetype caused the bursts. Another option is to try the SoS (Splunk on Splunk) app, which (maybe) could show this to you. And the last option is to check whether this information is stored in the _internal index. Worst case, you must write your own report to check event lengths and calculate summaries based on that.
Hi, we have multiple services that we want to filter out of journald. Is there a way to do the opposite of this stanza parameter, i.e., to exclude _SYSTEMD_UNIT=my.service?

journalctl-filter = _SYSTEMD_UNIT=my.service

If that's not possible, what's the best way to do that?
Hey, I've been working on a distributed Splunk environment where one of our indexes has a very high-cardinality "source" field (basically different for each event). I've noticed that when using tstats distinct_count to count the number of sources, I get an incorrect result (far from one per event). The query looks something like:

| tstats dc(source) where index=my_index

When I search over a smaller number of events (~100,000 instead of ~5,000,000), the result is correct. In addition, when using estdc I get a better result than with dc (which is wildly wrong). Finally, when using stats instead of tstats, I get the correct value:

index=my_index | stats dc(source)

Any ideas? My guess is that I'm hitting some memory barrier, but there is no indication of this.
index=_internal source=*license_usage.log earliest=-1d@d latest=now [search index=_internal source=*metrics.log fwdType=uf earliest=-1d@d latest=now | rename hostname as h | fields h] | stats sum(b) as total_usage_bytes by h | eval total_usage_gb = round(total_usage_bytes/1024/1024/1024, 2) | fields - total_usage_bytes | addcoltotals label="Total" labelfield="h" total_usage_gb

I think this is what I wanted, unless someone thinks it's inaccurate? Please advise. TY
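The eval/addcoltotals arithmetic in that search (bytes to GB, plus a grand-total row) can be sanity-checked outside Splunk; a sketch in Python with made-up per-forwarder byte totals:

```python
# Hypothetical output of: stats sum(b) as total_usage_bytes by h
total_usage_bytes = {"uf-host-a": 3 * 1024**3, "uf-host-b": 512 * 1024**2}

# eval total_usage_gb = round(total_usage_bytes/1024/1024/1024, 2)
total_usage_gb = {h: round(b / 1024**3, 2) for h, b in total_usage_bytes.items()}

# addcoltotals label="Total" labelfield="h" total_usage_gb
total_usage_gb["Total"] = round(sum(total_usage_gb.values()), 2)
print(total_usage_gb)  # {'uf-host-a': 3.0, 'uf-host-b': 0.5, 'Total': 3.5}
```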