All Posts

Thanks for your answer. I've already attempted the second solution you provided, which involved installing the app on the latest Add-On Builder version and validating it. Unfortunately, this approach didn't resolve the issue. In the event that we decide to develop the app from scratch, could we successfully add it to Splunkbase as the next version?
Hello, Splunkers!

In the current test environment, System A functions as the master license server, while System B operates as the slave utilizing System A's license. Unfortunately, System B lacks direct access to System A's master server. When executing the query below on System B, it yields the message "license usage logging not available for slave licensing instances" in the search results:

index=_internal source="*license_usage.log"

Is there a method to check licensing usage per index through a search query on the Indexer server in System B? Your assistance is greatly appreciated! Thank you in advance.
Hello, I still have not found the solution. I will look back into the material from 2017! Thank you
It's okay now. In the next triggered notable, it displayed. Thank you @gcusello.
I am not sure when they came in, but looking at a Splunk 7 instance I have, there is a mix of searchString and <search><query> in the same dashboard. Can you try converting the <searchString> to search/query and then adding an event handler outside the <query>, i.e.

<search>
  <query>your_search</query>
  <finalized>
    <set token="job_sid">$job.sid$</set>
  </finalized>
</search>

Not sure when/if <finalized> was changed to <done>, but I have seen <preview> and <progress> event handlers in old dashboards along with <finalized>. See if any of this works.
It's been a long time, but you may find what you need in $SPLUNK_HOME/share/splunk/search_mrsparkle/exposed/schema. The directory may be different, but there should be a simplexml.xsd schema file floating around to help you find elements, attributes, etc. The dashboard code is in search_mrsparkle as something like dashboard_*.js. You may need to de-minify the source, but you can search it and related files to see which built-in tokens are available.
(I've been a bit fascinated by this off and on today.) I hadn't done this earlier, but a simple rex command can be used to decompose the field value into its component code points:

| makeresults
| eval _raw="இடும்பைக்கு"
| rex max_match=0 "(?<tmp>.)"

_raw: இடும்பைக்கு
_time: 2024-01-07 18:04:18
tmp (multivalue): இ ட ு ம ் ப ை க ் க ு

Determining the Unicode category requires a lookup against a Unicode database, a subset of which I've attached as tamil_unicode_block.csv converted to pdf. The general_category field determines whether a code point is a mark (M*):

| makeresults
| eval _raw="இடும்பைக்கு"
| rex max_match=0 "(?<char>.)"
| lookup tamil_unicode_block.csv char output general_category
| eval length=mvcount(mvfilter(NOT match(general_category, "^M")))

_raw: இடும்பைக்கு
_time: 2024-01-07 21:41:41
char (multivalue): இ ட ு ம ் ப ை க ் க ு
general_category (multivalue): Lo Lo Mc Lo Mn Lo Mc Lo Mn Lo Mc
length: 6

I don't know if this is the correct way to count Unicode "characters," but libraries do use the Unicode character database (see https://www.unicode.org/reports/tr44/) to determine the general category of code points. Splunk would have access to this functionality via e.g. libicu.
** At least, I believe <search> & <query> are not applicable to version 6.1!?
We are still running Enterprise 6.1, and I am unable to locate the relevant documentation. I would like to know if I can access the job.sid using Simple XML, and if so, what the syntax might be. I gather I am restricted to the <searchString> element, as <search> & <query> are not relevant to version 6.1. Any assistance would be most appreciated.
It depends on the results you want.  If you expect the TA to extract fields for you then it must be installed on the HF.  If you don't care about field extractions then just install the TA on the UF. Either way, the TA does not need to be installed on the indexer.
Yeah, I messed that up.  I took 10% of the license rather than of the stored data.  I'll fix the post.
Hello Splunkers, I have an architecture-related question, if someone can help with it please. My architecture is: Log Source (Linux Server) > Heavy Forwarder > Indexer. Let's say I'm onboarding a new log source. When I install a UF on my Linux server, it connects back to my Deployment Server and gets the app (Linux TA) and the outputs.conf app, which basically contains my Heavy Forwarder details. Now my question is: do I need to have the same Linux TA installed on my Heavy Forwarder and Indexer too? Or is it sufficient as long as this TA is on the log source? Hope I have explained it well. Thanks for looking into this; I greatly appreciate your input. Regards, Moh.
Thanks Rick for checking my request and for your response. I'm after understanding auditd. As per my understanding, auditd provides more advanced logging and actually gives you much more insight in the audit log than the standard logging that is enabled by default on Linux systems; I'm not sure if my understanding is correct here, though. When we pull data from a simple RHEL server using Splunk, we basically install a Splunk UF and push the TA_NIX app, which we use to collect everything under /var/log/*. My understanding is that the logs under /var/log/* come from the default logging setup on Linux, which does not provide much context in the log, for example who logged in, the username, the source IP address, and the outcome, which can only be achieved using auditd rules. Is that true? Hope I was able to explain it well this time. I'd appreciate it if anyone can provide more insight on this.
Where did you get this 22-day value? I didn't find anything about a restore rate limitation, only that it's 10% of the overall storage entitlement. So if the OP has an 800GB/day ingest subscription, it includes 90 days of storage by default, which translates to the ability to restore up to 7.2TB of data at any given point in time (800GB × 90 days = 72TB of storage entitlement; 10% of that is 7.2TB), if I understand it correctly. (I'm not a Cloud expert; that's what I understand from Splunk's websites, so if I'm wrong feel free to correct me.)
A use case in this context is typically a search returning results corresponding to some security scenario, like finding excessive failed logins or a sequence of logins from geographically distant places in a short period of time. You need to check what data you have available, decide what you want to find, and think about how to find it; see the sketch below for one example. The free Security Essentials app is indeed a good source for possible use cases.
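For instance, a minimal sketch of an "excessive failed logins" search; the index name and the user, src, and action field names are assumptions here and will vary with your data sources and whether they are CIM-normalized:

index=auth action=failure
| bin _time span=15m
| stats count AS failures BY _time, user, src
| where failures > 10

The same pattern (filter, bucket by time, aggregate, threshold) carries over to most other use cases of this kind.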
My teacher gave me this task: "You need to apply at least 3 different use cases that we will change according to your scenario. Show various use cases on the Dashboards you create. You can refer to sample use cases on the Internet or in the Security Essentials application on Splunk." But I don't know how to do this task. He gave us an empty Splunk server and this task. How can I create a use-case scenario? Thank you for your time...
If you use the Splunk Auto Archive (DDAA) service then it will take 10 days to restore all 1.7TB of data. Each restored chunk remains searchable for 30 days, so you'll have only 20 days during which the whole thing can be searched. Restored data is treated much the same as thawed data in that it is indexed and searchable, but is not subject to the index retention time. Splunk Cloud automatically removes the restored data after 30 days. See https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/Admin/DataArchiver#Restore_archived_data_to_Splunk_Cloud_Platform for details.

If you use Splunk's Self Service Archive (DDSS) then the data must be restored to an on-prem (or private cloud) instance much the same way you would restore frozen data in Splunk Enterprise. There are no time limits for restored DDSS data. See https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/Admin/DataSelfStorage#Restore_indexed_data_from_a_self_storage_location for more.
It does seem like a bug in the optimizer. I did a search

index=winevents sourcetype=WinEventLog
| where sourcetype="WinEventLog"

It did not return any results, as in your case. But when I opened the job inspector, it showed me the search optimized to

search (index="winevents" sourcetype=WinEventLog sourcetype=CASE("WinEventLog"))
In a quick test, TRUNCATE cuts the event after the last complete UTF-8 character that fits under the byte limit. No partial code points are indexed.
Normalization may not help for Tamil, which doesn't appear to have canonically equivalent composed forms for most characters in Unicode. I.e., the string இடும்பைக்கு can only (?) be represented in Unicode using 11 code points.
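If you want to sanity-check that count, here's a minimal sketch reusing the rex decomposition from the earlier post (assuming the default UTF-8 charset, where . matches one code point); the cp field name is just illustrative:

| makeresults
| eval _raw="இடும்பைக்கு"
| rex max_match=0 "(?<cp>.)"
| eval code_points=mvcount(cp)

code_points should come out as 11, matching the decomposition shown in the earlier post.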