All Posts

My database contains two types of events, and I want to ensure that only the latest row for each unique TASKID is ingested into Splunk, with the following requirements:

Latest status: only the most recent status for each TASKID should be captured, determined by the UPDATED timestamp field.
Latest date: the row with the most recent UPDATED timestamp for each TASKID should be ingested into Splunk.
Single count: each TASKID should appear only once in Splunk, with no duplicates or older rows included.

Please help me achieve this requirement. The method I am currently using is the "Rising column update" method, but Splunk is still not ingesting the row with the latest status. This is the query I am using in the SQL input under DB Connect:

SELECT * FROM "DB"."KSF_OVERVIEW" WHERE TASKIDUPDATED > ? ORDER BY TASKIDUPDATED ASC

Below are sample events from the database.

=====Status "FINISHED"
2024-12-06 11:50:22.984, TASKID="11933815411", TASKLABEL="11933815411", TASKIDUPDATED="11933815411 2024/12/05 19:40:47", TASKTYPEKEY="PACKGROUP", CREATED="2024-12-05 14:18:18", UPDATED="2024-12-05 19:40:47", STATUSTEXTKEY="Dynamic|TaskStatus.key{FINISHED}.textKey", CONTROLLERSTATUSTEXTKEY="Dynamic|TaskControllerStatus.taskTypeKey{PACKGROUP},key{EXECUTED}.textKey", STATUS="FINISHED", CONTROLLERSTATUS="EXECUTED", REQUIREDFINISHTIME="2024-12-06 00:00:00", STATION="PAL/Pal02", REQUIRESCUBING="0", REQUIRESQUALITYCONTROL="0", PICKINGSUBTASKCOUNT="40", TASKTYPETEXTKEY="Dynamic|TaskType.Key{PACKGROUP}.textKey", OPERATOR="1", MARSHALLINGTIME="2024-12-06 06:30:00", TSU="340447278164799274", FMBARCODE="WMC000000000341785", TSUTYPE="KKP", TOURNUMBER="2820007682", TYPE="DELIVERY", DELIVERYNUMBER="17620759", DELIVERYORDERNUMBER="3372948211", SVSSTATUS="DE_FINISHED", STORENUMBER="0000002590", STACK="11933816382", POSITION="Bottom", LCTRAINID="11935892717", MARSHALLINGAREA="WAB"

=====Status "RELEASED"
2024-12-05 14:20:13.290, TASKID="11933815411", TASKLABEL="11933815411", TASKIDUPDATED="11933815411 2024/12/05 14:18:20", TASKTYPEKEY="PACKGROUP", CREATED="2024-12-05 14:18:18", UPDATED="2024-12-05 14:18:20", STATUSTEXTKEY="Dynamic|TaskStatus.key{RELEASED}.textKey", CONTROLLERSTATUSTEXTKEY="Dynamic|TaskControllerStatus.taskTypeKey{PACKGROUP},key{CREATED}.textKey", STATUS="RELEASED", CONTROLLERSTATUS="CREATED", REQUIREDFINISHTIME="2024-12-06 00:00:00", REQUIRESCUBING="0", REQUIRESQUALITYCONTROL="0", PICKINGSUBTASKCOUNT="40", TASKTYPETEXTKEY="Dynamic|TaskType.Key{PACKGROUP}.textKey", OPERATOR="1", MARSHALLINGTIME="2024-12-06 06:30:00", TSUTYPE="KKP", TOURNUMBER="2820007682", TYPE="DELIVERY", DELIVERYNUMBER="17620759", DELIVERYORDERNUMBER="3372948211", SVSSTATUS="DE_CREATED", STORENUMBER="0000002590", STACK="11933816382", POSITION="Bottom", MARSHALLINGAREA="WAB"
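One search-time workaround, sketched under the assumption that the rows land in index=ksf with sourcetype=ksf_overview (both placeholder names): let the rising-column input pick up every new row as it appears, and keep only the newest row per TASKID at search time.

index=ksf sourcetype=ksf_overview
| eval updated_epoch=strptime(UPDATED, "%Y-%m-%d %H:%M:%S")
| sort 0 - updated_epoch
| dedup TASKID

Note that a rising-column input by itself cannot retroactively drop the older RELEASED row once it has been indexed; the checkpoint only controls which new rows are fetched on each poll.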
Great. Effectively, XML is quite obsolete. Thanks again.
1. Are you sure you even have such data in your Splunk? (And do you have access to it?)
2. Email logs are typically a pain to work with since the information about a single message is usually spread across a whole lot of events, often changing identifiers as the message goes through the various stages of email processing. This includes Postfix - it can pass the message back and forth between different components, and if you have amavis or an external spamd in the mix... boy, you're in for a treat.
3. Unless you do something non-standard with your logging, email daemons like Postfix, sendmail or Exim do _not_ log info from within the message (like the subject). They typically only log the envelope info.
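As a rough illustration of point 2: one common approach is to extract the Postfix queue ID and group events on it. A sketch only, assuming the data sits in index=mail with sourcetype=postfix (both placeholders) and the default Postfix syslog format:

index=mail sourcetype=postfix
| rex "postfix/\w+\[\d+\]: (?<qid>[A-F0-9]{6,14}):"
| transaction qid maxspan=1h
| table _time, qid, eventcount, _raw

Fields like status, to and relay are usually auto-extracted from the key=value pairs in the delivery lines, but - per point 3 - the subject will not be there unless something upstream logs it.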
One hint - while Splunk returns XML by default, it might be easier to use -d output_mode=json with your curl and work with the JSON output - there are more easily available tools for manipulating JSON in a shell than for XML. So you can "easily" do something like this:

curl -k -u admin:pass https://splunksh:8089/servicesNS/-/-/saved/searches -d output_mode=json -d count=0 --get | jq '.entry | map(.) | .[] | {name: .name, app: .acl.app}'

or even

curl -k -u admin:pass https://splunksh:8089/servicesNS/-/-/saved/searches -d output_mode=json -d count=0 --get | jq '.entry | map(.) | .[] | .acl.app + ":" + .name'

(The jq tool is fairly easily available in modern distros, while xmllint or similar tools might not be.)
Hi, Splunk is a new tool to me, so I apologize for the very basic question. Could you please provide a query that shows the email delivery status with the reason (detailed information on whether a message was delivered or not), and that covers multiple specific subjects, from the Postfix sources?
Hi, as others already said, from the company's security point of view this is an issue and you definitely should fix it. On Splunk Cloud this same warning has already been there for (at least) a couple of months, and it should be fixed now at the latest. r. Ismo
Just a beginning for a shell script... with script parameters (user and app in variables), I'm close enough to what I'm seeking:

curl -skL -u 'usr:pwd' 'https://SHC_NODE:8089/servicesNS/admin/MYAPP/saved/searches?count=-1' | egrep '<title>|name="app">|name="sharing">|name="owner">|name="disabled">' | grep -v '<title>savedsearch</title>' | sed -n -e '/title/,+4p' | paste - - - - - | grep 'MYAPP' | grep 'title' | sed 's/ //g ; s/\t//g'

Perhaps not perfect yet... but close. Thanks.
The count parameter seems to be a general parameter recognized by all (?) GET endpoints. It's indeed not explicitly documented, although it's hinted at here: https://docs.splunk.com/Documentation/Splunk/latest/RESTUM/RESTusing And I don't think you can filter in the REST call itself. You have to get all results and post-process them yourself - the eai:appName field should contain the name of the app the search is defined in. (And I always use /servicesNS/-/-/ and just filter afterwards.)
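For example, building on the output_mode=json approach from the other reply (host, credentials and app name are placeholders), you can pull everything and filter on the app afterwards:

curl -k -u admin:pass https://splunksh:8089/servicesNS/-/-/saved/searches -d output_mode=json -d count=0 --get | jq '.entry[] | select(.acl.app == "MYAPP") | .name'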
What is it with the latest peak of questions about "sending the data into two indexer(s| clusters) while modifying one stream"? Suddenly everyone has this borderline use case? Why do that in the first place? Is it really worth paying extra for double the license? What actually is your use case?
Ahhhhhhhhhhh, here we go!!! It also takes the "sharing=global" objects, I understand. Are there more parameters to filter directly from the GET? I can't find them in the documentation 🤷 (also the "?count=x" is not documented). Thanks.
This will actually send raw data suitable for further processing by a third-party solution. It will not keep the metadata and it will not use the S2S protocol; it will just send a "TCP syslog" stream.
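A minimal outputs.conf sketch of that kind of raw TCP export (the group name, host and port below are placeholders):

# outputs.conf on the HF/indexer doing the export
[syslog:wazuh_export]
server = wazuh.example.com:514
type = tcp

Selecting which events actually get sent to that group is typically done with a props.conf/transforms.conf pair writing to the _SYSLOG_ROUTING key.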
Have you at least peeked into the installation manual? https://docs.splunk.com/Documentation/Splunk/latest/Installation/Whatsinthismanual
Nope. You're mixing up two different things. One is where the search is defined. Another is where it is visible. By calling /servicesNS/admin/myapp you're getting a list of objects _visible_ in the context of the user admin and the app myapp. A given search might as well be defined in another app and shared globally.
It's... complicated. Splunk doesn't keep network-level metadata about its sources. So (apart from the values set in the default metadata fields by the input settings) you can't - for example - tell from which IP the syslog data came or which UF sent a particular event. You can set it on the source by using the _meta setting per input, but that has its own share of issues (a sketch follows below).
1. If you want to capture the source UF name or IP you'd need to set it to a different value for each UF. That's hard to maintain since - except for some very rare cases - Splunk conf files don't use variables/templates, so you need to set it explicitly on each host.
2. There is only a single _meta entry for each input, so if you wanted to set two different values (for example, one metadata field for a forwarder name and one for the network zone name), you can't set them in different places and have Splunk merge them into one combined setting. One would overwrite the other. So while it is "kinda possible", it's not a very useful way to do it. You might be able to pull it off if you used an external tool to manage your forwarders' configs - one which supports templating - so you could dynamically generate those configs for the forwarders.
3. Oh, and remember that if you specify [default] settings for inputs you still need a separate setting for the Windows event log inputs - the default ones are not applied there.
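A sketch of what that per-forwarder setting could look like - the field names and values here are made up for illustration:

# inputs.conf on one specific forwarder
[default]
_meta = uf_name::uf-web01 net_zone::dmz

# per point 3, repeat it for Windows event log inputs, which don't pick up [default]
[WinEventLog://Security]
_meta = uf_name::uf-web01 net_zone::dmz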
If I remember correctly, Wazuh is based on OpenSearch. So you need to configure syslog input(s) on Wazuh's side and syslog export on your HF(s) and/or indexer(s) (depending on your particular architecture and ingestion process).
OK. Several things here.
1. An external lookup is not the same as an external command. An external lookup is a somewhat simpler version of an external command.
2. An external lookup is _not_ the same as an automatic lookup. An external lookup uses the SPL lookup command syntax to execute your external script, while an automatic lookup is a lookup which is automatically invoked on your data without the need to manually invoke the lookup command. The typical application of an automatic lookup is adjusting field values to the CIM datamodel.
3. But still, the lookup invocation must match its definition, so I pointed you to the fact that your stanza was named ucd_category_lookup but you were trying to use ucd_count_chars_lookup - these didn't match.
Did you read https://dev.splunk.com/enterprise/docs/devtools/externallookups/ ?
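To make the naming point concrete, a minimal external lookup definition might look like this - the stanza name matches your config, but the script and field names (value, category) are placeholders:

# transforms.conf in your app (the script lives in the app's bin directory)
[ucd_category_lookup]
external_cmd = ucd_category_lookup.py value category
fields_list = value, category

and it would then be invoked with exactly that stanza name:

... | lookup ucd_category_lookup value OUTPUT category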
Hi @arjun,
you can calculate the license consumption per day using [Settings > License > License Consumption > Past days > by index].
Using your search you have the total license consumption; you cannot divide it per customer. As I already said, multitenancy isn't a Community topic: it requires Splunk PS or a Certified Architect who has already done this job (like me).
Ciao.
Giuseppe
OK. Now let's back up a little. Explain in your own words, without using SPL, what business problem you're trying to solve here. What are you trying to achieve? You're clearly trying to "implement a non-SPL thing in SPL", which is usually not a very good idea. Or at least not a very efficient one. And the same things can often be achieved in a different way.
You can raise a case with Cloud support.
Hi @gcusello, I am trying to get data related to usage and billing from Splunk. Here is the query I am using for that:

index=_telemetry source=*license_usage_summary.log*
| bin _time span=1d
| stats sum(b) as TotalBytes by _time
| eval GB=round(TotalBytes / (1024 * 1024 * 1024), 2)
| timechart span=1d values(GB) as "Daily Indexed GB"

And per my research Splunk has a few more such indexes, like _internal and _audit. I just want to know if this is the correct approach or not.
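For comparison, a commonly used variant reads the license manager's own usage log instead of _telemetry; a sketch, assuming the _internal index (and license_usage.log in it) is searchable from your search head, which is not always the case on Splunk Cloud:

index=_internal source=*license_usage.log* type="Usage"
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) by idx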