All Posts



Remove the ':' at the end of the regex and it should work. You can't get | makeresults and props to work at the same time: makeresults creates synthetic events, and props only applies to real events.
Hi @richgalloway  Thanks for your reply. Apologies for the delay in replying, but I had to test it. Please see the results here: https://regex101.com/r/7u6vAP/1 Now, as I have asked @ITWhisperer, I need to figure out how to make both work: the | makeresults | rex mode=sed ........ approach and the props SEDCMD-reducing_4702=? setting, to strip the event and thus reduce its weight in bytes. Thank you
Hi @ITWhisperer, Please have a look at https://regex101.com/r/wRe1Ai/1 That works in the regex101 web portal, but it does not work under makeresults or with SEDCMD in props.conf. I had to remove the (?ms).*(?<ei>\ part, as SEDCMD s/ would accept neither it nor the <ei> bit. Can you please work out the exact SEDCMD-reducing_4702=s/........g expression that will be compatible with SEDCMD? Also, can you try it in Splunk, e.g. build the | makeresults SPL, and see whether the SPL you provide removes the unwanted parts from the event? Thank you.
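Since a SEDCMD value uses the same s/regex/replacement/flags syntax as sed, a candidate expression can be dry-run locally before editing props.conf. A minimal sketch, assuming a made-up sample event and pattern (the real ones live at the regex101 links above):

```shell
# Hypothetical event; substitute the real pattern from the regex101 thread.
event='header KEEP_ME <payload>lots of bulky xml</payload> trailer'

# The s/.../.../g expression below is exactly what would go after
# "SEDCMD-reducing_4702=" in props.conf:
printf '%s\n' "$event" | sed -E 's/<payload>.*<\/payload>/<payload\/>/g'
# → header KEEP_ME <payload/> trailer
```

Note that sed (and therefore SEDCMD) has no named groups like (?<ei>...) and no inline modifiers like (?ms), which is why the regex101 version has to be rewritten with plain groups and backreferences before it can be used in props.conf.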
Hi Mario, Thanks, the query worked with the max we input. Can we alias the result field name (toInt((tokenExpirationDateTime - now()) / (24*60*60*1000)))) to tokenExpirationDateTime?
Again - there is no way to update an existing event within Splunk, so you can't keep only the latest status. As simple as that. You can try to work around that by ingesting the state periodically and holding it in a lookup or something similar, but this approach doesn't scale well.
Thanks for your reply, I will try that first. If it succeeds, I'll be back to accept it as the solution so other people who have the same problem can use these steps. Thanks, Zake
My database contains two types of events, and I want to ensure that only the latest row for each unique TASKID is ingested into Splunk, with the following requirements:

Latest Status: Only the most recent status for each TASKID should be captured, determined by the UPDATED timestamp field.
Latest Date: The row with the most recent UPDATED timestamp for each TASKID should be ingested into Splunk.
Single Count: Each TASKID should appear only once in Splunk, with no duplicates or older rows included.

Please help me achieve this requirement. The method I am currently using is the "Rising column update" method, but Splunk is still not ingesting the row with the latest status. I am using the query below in a SQL input under DB Connect.

SELECT * FROM "DB"."KSF_OVERVIEW" WHERE TASKIDUPDATED > ? ORDER BY TASKIDUPDATED ASC

Below are the sample events from the database.

=====Status "FINISHED"
2024-12-06 11:50:22.984, TASKID="11933815411", TASKLABEL="11933815411", TASKIDUPDATED="11933815411 2024/12/05 19:40:47", TASKTYPEKEY="PACKGROUP", CREATED="2024-12-05 14:18:18", UPDATED="2024-12-05 19:40:47", STATUSTEXTKEY="Dynamic|TaskStatus.key{FINISHED}.textKey", CONTROLLERSTATUSTEXTKEY="Dynamic|TaskControllerStatus.taskTypeKey{PACKGROUP},key{EXECUTED}.textKey", STATUS="FINISHED", CONTROLLERSTATUS="EXECUTED", REQUIREDFINISHTIME="2024-12-06 00:00:00", STATION="PAL/Pal02", REQUIRESCUBING="0", REQUIRESQUALITYCONTROL="0", PICKINGSUBTASKCOUNT="40", TASKTYPETEXTKEY="Dynamic|TaskType.Key{PACKGROUP}.textKey", OPERATOR="1", MARSHALLINGTIME="2024-12-06 06:30:00", TSU="340447278164799274", FMBARCODE="WMC000000000341785", TSUTYPE="KKP", TOURNUMBER="2820007682", TYPE="DELIVERY", DELIVERYNUMBER="17620759", DELIVERYORDERNUMBER="3372948211", SVSSTATUS="DE_FINISHED", STORENUMBER="0000002590", STACK="11933816382", POSITION="Bottom", LCTRAINID="11935892717", MARSHALLINGAREA="WAB"

=====Status "RELEASED"
2024-12-05 14:20:13.290, TASKID="11933815411", TASKLABEL="11933815411", TASKIDUPDATED="11933815411 2024/12/05 14:18:20", TASKTYPEKEY="PACKGROUP", CREATED="2024-12-05 14:18:18", UPDATED="2024-12-05 14:18:20", STATUSTEXTKEY="Dynamic|TaskStatus.key{RELEASED}.textKey", CONTROLLERSTATUSTEXTKEY="Dynamic|TaskControllerStatus.taskTypeKey{PACKGROUP},key{CREATED}.textKey", STATUS="RELEASED", CONTROLLERSTATUS="CREATED", REQUIREDFINISHTIME="2024-12-06 00:00:00", REQUIRESCUBING="0", REQUIRESQUALITYCONTROL="0", PICKINGSUBTASKCOUNT="40", TASKTYPETEXTKEY="Dynamic|TaskType.Key{PACKGROUP}.textKey", OPERATOR="1", MARSHALLINGTIME="2024-12-06 06:30:00", TSUTYPE="KKP", TOURNUMBER="2820007682", TYPE="DELIVERY", DELIVERYNUMBER="17620759", DELIVERYORDERNUMBER="3372948211", SVSSTATUS="DE_CREATED", STORENUMBER="0000002590", STACK="11933816382", POSITION="Bottom", MARSHALLINGAREA="WAB"
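Since indexed events can't be updated, the usual fallback is to ingest everything and keep only the newest row per TASKID at search time (in SPL, something along the lines of | sort - UPDATED | dedup TASKID). A rough shell sketch of that dedup logic, using two made-up CSV rows (TASKID,UPDATED,STATUS):

```shell
# Two hypothetical rows for the same TASKID: TASKID,UPDATED,STATUS
printf '%s\n' \
  '11933815411,2024-12-05 14:18:20,RELEASED' \
  '11933815411,2024-12-05 19:40:47,FINISHED' |
# sort newest first on the UPDATED column, then keep the first
# (i.e. latest) row seen for each TASKID:
sort -t, -k2,2r | awk -F, '!seen[$1]++'
# → 11933815411,2024-12-05 19:40:47,FINISHED
```

This is only an illustration of the dedup step, not a fix for the rising-column input itself; the rising column can only pick up rows that are new since the last checkpoint.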
Great! Indeed, XML is quite obsolete. Thanks again.
1. Are you sure you even have such data in your Splunk? (and have access to it) 2. Email logs are typically a pain to work with since information about a single message is usually spread across a whole lot of events, often changing identifiers for the message as it goes through various stages of email processing. This includes Postfix - it can pass the message back and forth between different components and if you have amavis or external spamd in the mix... boy, you're in for a treat. 3. Unless you do something non-standard with your logging, email daemons like postfix, sendmail or exim do _not_ contain info from within the message (like subject). They typically only have the envelope info.  
One hint - while Splunk returns XML by default, it might be easier to add -d output_mode=json to your curl and use the JSON output - there are more readily available tools for manipulating JSON in a shell than for XML. So you can "easily" do something like this:

curl -k -u admin:pass https://splunksh:8089/servicesNS/-/-/saved/searches -d output_mode=json -d count=0 --get | jq '.entry | map(.) | .[] | {name: .name, app: .acl.app}'

or even

curl -k -u admin:pass https://splunksh:8089/servicesNS/-/-/saved/searches -d output_mode=json -d count=0 --get | jq '.entry | map(.) | .[] | .acl.app + ":" + .name'

(the jq tool is easily available in modern distros, while xmllint or similar tools might not be).
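The jq part of those pipelines can be tried offline against a stand-in for the JSON shape that /saved/searches returns with output_mode=json. The sample below mocks only the two fields the filter touches (names and apps are invented); note that .entry | map(.) | .[] can also be shortened to .entry[]:

```shell
# Hypothetical stand-in for the REST response body:
cat <<'EOF' | jq -r '.entry[] | .acl.app + ":" + .name'
{"entry":[
  {"name":"Errors last 24h","acl":{"app":"search"}},
  {"name":"License usage","acl":{"app":"myapp"}}
]}
EOF
# → search:Errors last 24h
# → myapp:License usage
```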
Hi, Splunk is a new tool to me, so I apologize for the very basic question. Could you please provide a query that includes the email delivery status with the reason (or detailed information on whether it was delivered or not), as well as multiple specific subject sources, from Postfix?
Hi, as others have already said, from a company security point of view this is an issue and you definitely should fix it. On Splunk Cloud this same warning has already been there for (at least) a couple of months, so you should fix it now at the latest. r. Ismo
Just a beginning for shell... With script parameters (user and app in variables), I'm close enough to what I'm seeking:

curl -skL -u 'usr:pwd' 'https://SHC_NODE:8089/servicesNS/admin/MYAPP/saved/searches?count=-1' | egrep '<title>|name="app">|name="sharing">|name="owner">|name="disabled">' | grep -v '<title>savedsearch</title>' | sed -n -e '/title/,+4p' | paste - - - - - | grep 'MYAPP' | grep 'title' | sed 's/ //g ; s/\t//g'

Perhaps not perfect yet... but close. Thanks.
The count parameter seems to be a general parameter recognized by all (?) GET endpoints. It's indeed not explicitly documented, although it's hinted at here: https://docs.splunk.com/Documentation/Splunk/latest/RESTUM/RESTusing And I don't think you can filter in the REST call itself. You have to get all the results and post-process them yourself - eai:appName should contain the name of the app the search is defined in. (And I always use /servicesNS/-/-/ and just filter afterwards.)
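Since the filtering has to happen client-side, one sketch is to let jq do the "only searches from app X" step on the full result set. This assumes output_mode=json, where the app name appears under each entry's acl block (the JSON below is a mocked-up sample, not real server output):

```shell
# Keep only the names of searches defined in the hypothetical app "myapp":
cat <<'EOF' | jq -r '.entry[] | select(.acl.app == "myapp") | .name'
{"entry":[
  {"name":"Errors last 24h","acl":{"app":"search"}},
  {"name":"License usage","acl":{"app":"myapp"}}
]}
EOF
# → License usage
```

In a real pipeline the here-doc would be replaced by the curl call against /servicesNS/-/-/saved/searches with -d output_mode=json -d count=0.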
What is it with the latest spike of questions about "sending the data into two indexer(s| clusters) while modifying one stream"? Suddenly everyone has this borderline use case? Why do that in the first place? Is it really worth paying extra for double the license? What actually is your use case?
Ahhhhhhhhhhh, here we go!!! It also takes the "sharing=global" objects, I understand. Are there more parameters to filter with directly in the GET? I can't find them in the Documentation 🤷‍ (the "?count=x" isn't documented either). Thanks.
This will actually send raw data suitable for further processing by a third-party solution. It will not keep the metadata and will not use the S2S protocol; it will just send a "TCP syslog" stream.
Have you at least peeked into the installation manual? https://docs.splunk.com/Documentation/Splunk/latest/Installation/Whatsinthismanual
Nope. You're mistaking two different things. One is where the search is defined. The other is where it is visible. By calling /servicesNS/admin/myapp you're getting a list of objects _visible_ in the context of user admin and app myapp. A search might as well be defined in another app and shared globally.