All Posts



Thank you @bowesmana. However, I could not get the results with the above one. Let me try to lay out the requirement with an example again.

Search 1:
index=_internal sourcetype=scheduler earliest=-1h@h latest=now
| stats latest(status) as FirstStatus by scheduled_time savedsearch_name
| search NOT FirstStatus IN ("success","delegated_remote")

This query will give a result like below:
scheduled_time   savedsearch_name   FirstStatus
1712131500       ABC                skipped

Now I want to take the savedsearch_name ABC and the scheduled_time=1712131500 into the next query and search like below:
index=_internal sourcetype=scheduler savedsearch_name="ABC" earliest=-1h@h latest=now
| eval failed_time="1712131500"
| eval range=if((failed_time>=durable_cursor AND failed_time<=scheduled_time),"COVERED","NOT COVERED")
| where durable_cursor!=scheduled_time
| table savedsearch_name durable_cursor scheduled_time range

Sample scheduler event:
04-03-2024 05:38:18.025 +0000 INFO SavedSplunker ... savedsearch_name="ABC", priority=default, status=success, durable_cursor=1712131400, scheduled_time=1712131600

Combining both into one search is fine. If not, taking the values, passing them into a lookup and then referring to it later is also fine.
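For reference, a rough, untested sketch of one way to chain the two searches in a single query with the map command, which runs the second search once per failed result and substitutes the field values; the field names come from the examples above, and maxsearches=100 is an arbitrary cap:

index=_internal sourcetype=scheduler earliest=-1h@h latest=now
| stats latest(status) as FirstStatus by scheduled_time savedsearch_name
| search NOT FirstStatus IN ("success","delegated_remote")
| rename scheduled_time as failed_time
| map maxsearches=100 search="search index=_internal sourcetype=scheduler savedsearch_name=\"$savedsearch_name$\" earliest=-1h@h latest=now | eval failed_time=$failed_time$ | eval range=if(failed_time>=durable_cursor AND failed_time<=scheduled_time, \"COVERED\", \"NOT COVERED\") | where durable_cursor!=scheduled_time | table savedsearch_name durable_cursor scheduled_time failed_time range"

Alternatively, Search 1 could end with | outputlookup to store the failed savedsearch_name/failed_time pairs, and Search 2 could read them back with inputlookup, as mentioned above.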
Hello @PickleRick, the other 4 points you mentioned were of no use, which is why they are not included in the answer. A couple of points:
1. The question author has mentioned: "I'm fairly new to Splunk and I'm still learning how to set things up so as many details as possible would be helpful." - which is why the answer mentions having everything in one place and monitoring it later, which is the usual practice.
2. A Community answer initiates a "thread" where further discussion can take place about what to achieve and how to achieve it.
3. The question also mentions: "I believe the process is to have the printers redirect their logs to the print server to a specific folder, then add that folder to the list of logs being reported in the Splunk forwarder. Does that sound correct?" - which is one of the best ways to monitor the logs from one place.
4. It's not copy-pasting an answer; it's about taking a reference -> checking its authenticity -> updating it as required and sharing it with the community. One could literally ask each and every Splunk Community question over GPT and paste the answers - but that's not what is happening. We as a community want to use new tools while making sure whatever we post is authentic and actually helps the ones who post here.
#012 here is the Line Feed character (\n) escaped by rsyslog (likewise, #011 is an escaped \t). The question is why it's escaped. It would be easiest if the events were broken by rsyslog itself.
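If the events have to stay escaped, one search-time workaround (an untested sketch; the base search is a placeholder) is to unescape the sequences with rex in sed mode, which only changes how the field reads at search time and does not split the data into separate events:

<your base search>
| rex mode=sed field=_raw "s/#012/\n/g"
| rex mode=sed field=_raw "s/#011/\t/g"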
Hello! When I set up collection of Google Workspace's OAuth Token event log using Google Workspace for Splunk, the following error occurs. The credential is valid, so other logs (drive, login, etc.) are being collected fine. I would like to know the cause and solution.

error_message="'str' object has no attribute 'get'" error_type="<class 'AttributeError'>" error_arguments="'str' object has no attribute 'get'" error_filename="google_client.py" error_line_number="1242" input_guid="{input-guid-number}" input_name="token"

e.g. Google Workspace OAuth Token log: https://developers.google.com/admin-sdk/reports/v1/appendix/activity/token?hl=en
Migration of VMs between datastores is completely transparent to Splunk running inside that VM. So, as @scelikok said, as long as you have enough performance on that datastore, you should be OK, but the process itself is something that your virtualization admin should handle and it's out of the scope of this forum.
1. Don't use simple regexes to manipulate structured data. Sooner or later you'll regret it (you'll get yourself into a situation with some (un)escaped delimiter or a similar thing).
2. This is not well-formed JSON.
3. Splunk doesn't handle JSON (or any other structured data like XML) with additional content "surrounding" it well in terms of automatic extraction, so your best bet would be to extract the JSON part (with caution - see point 1) and run the spath command on that field; see the sketch below. Unfortunately, it cannot be made an automatic extraction. It needs to be invoked manually in your search pipeline.
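A rough sketch of point 3, assuming the JSON object runs from the first { to the last } in the raw event so a greedy match is acceptable; the field name json_payload and the base search are illustrative only:

<your base search>
| rex field=_raw "(?<json_payload>\{.*\})"
| spath input=json_payload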
No, it doesn't work in the trellis layout even though the result is sorted. I am already using the following in the query: sort 0 group S_no
Hello, just checking in - was the issue resolved, or do you have any further questions?
Hello @splunkreal, just checking in - was the issue resolved, or do you have any further questions? If there are none, can you please accept the answer so anyone having the same question in the future can get to the solution quickly?
Hello @maverick27, sort should work in that case, right? i.e. | sort GroupNum S_no
Hi All, Is it possible to use Splunk for tracking logs from SAP CPQ, CPI, C4C? I couldn't find relevant information regarding this anywhere. Appreciate your help!
Hello, Notion does not support on-premises Splunk or the Splunk Cloud trial; it only supports Splunk Cloud Enterprise. If you use Splunk Cloud Enterprise, you need to enter the URL in the format below.
https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector
Send data to HTTP Event Collector on Splunk Cloud Platform - you must send data using a specific URI for HEC.
The standard form for the HEC URI in Splunk Cloud Platform free trials is:
<protocol>://http-inputs-<host>.splunkcloud.com:<port>/<endpoint>
The standard form for the HEC URI in Splunk Cloud Platform is:
<protocol>://http-inputs-<host>.splunkcloud.com:<port>/<endpoint>
The standard form for the HEC URI in Splunk Cloud Platform on Google Cloud is:
<protocol>://http-inputs.<host>.splunkcloud.com:<port>/<endpoint>
The standard form for the HEC URI in Splunk Cloud FedRAMP Moderate on AWS GovCloud is:
<protocol>://http-inputs.<host>.splunkcloudgc.com:<port>/<endpoint>
Hello @SplunkDash, can you please check the below -

| makeresults
| eval _raw="accid,nameA,addressA,cellA
002,test1,tadd1,1234
003,test2,tadd2,1256
003,test2,tadd2,5674
004,test3,tadd3,2345
005,test4,tadd4,4567
006,test5,tadd5,7800
006,test5,tadd5,9900"
| multikv forceheader=1
| eval sourcetype="sourcetypeA"
| append
    [| makeresults
    | eval _raw="accid,nameB,addressB,cellB
002,test1,tadd1,1234
003,test2,tadd2,5674
004,test3,tadd3,2345
005,test4,tadd3,4567
006,test5,tadd5,9900"
    | multikv forceheader=1
    | eval sourcetype="sourcetypeB" ]
| kv
| stats values(*) as * by accid
| where mvcount(nameA) != mvcount(nameB) OR mvcount(addressA) != mvcount(addressB) OR mvcount(cellA) != mvcount(cellB)

Please let me know if you have any questions about the above. Please accept the solution and hit Karma, if this helps!
Hello, thank you @ITWhisperer @meetmshah for the quick revert, and apologies for the delay in responding. The solution indeed works. However, when I try to create a trellis layout (split by S_no), the graphs are displayed in the original order (1,3,2,4,5,6) and not how I want them to be, i.e. 1,2,3,4,5,6. Is this a bug by any chance?
@danspav Hello, thank you for your answer, it was very helpful! I suspected it had something to do with the default and submitted token topics, but even though I searched online I did not find any clear explanation. In this regard, do you have a link to share that explains these topics once and for all? I would really like a solid understanding that lets me avoid having to test my tokens' behavior in my dashboards every time. PS: I really didn't know that you could reference submitted tokens by just typing submitted: before the token name. Very helpful!!
I am stuck at 'waiting for connection', whereas the agent connection is showing green and connected, as shown in the picture below. Can somebody help me, please?
^ Post edited by @Ryan.Paredez to redact the Controller name and URL in the screenshot. Please do not share your account name or Controller URL in Community posts, for security and privacy reasons.
I am using regex to extract fields from the JSON data below. I want to extract the fields as key-value pairs, especially from log.message. For example, I need the "action" field from log.message.

clusterName: cluster-9gokdwng4f
internal_tag: internal_security
log: {
    message: {"action":"EXECUTE","class":"System-Queue","eventC":"Data access event","eventT":"Obj-Open with role","timeStamp":"Wed 2024 Apr 03, 04:58:28:932"}
    stack:
    thread_name: Batch-1
    timestamp: 2024-04-03T04:58:28.932Z
    version: 1
}
}
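A hedged sketch for pulling "action" out of log.message, assuming the raw event is valid JSON (the snippet above looks like the formatted event-viewer rendering) and that log.message holds the inner JSON as a string; the field name message_json and the base search are illustrative only:

<your base search>
| spath path=log.message output=message_json
| spath input=message_json path=action output=action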
Hi everyone, is anyone else having issues with the Clients tab not showing the correct server classes for the host names? For example, we have Windows systems that are being labeled as Linux because we have a server class with a filter of * but specific to the linux-x86_64 machine type. This almost gave me a heart attack because I thought the apps tied to this server class were going to replace the Windows ones. However, when I go into the server class itself, the "Matched" tab only shows the devices that match the filter, and when I check a handful of Windows devices, I don't see the apps that are tied to the Linux server class. Is anyone else experiencing this as well? And if so, has a fix been found?
Minor point, but the number of seconds in a day is 86400 (24 x 60 x 60), not 86000.