All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi Everyone, Is anyone else having issues with the Client tab not showing the correct Server Classes for the Host Names? For example, we have Windows systems that are being labeled as Linux because we have a server class with a filter of * but specific to the linux-x86_64 machine type. This almost gave me a heart attack because I thought the apps tied to this server class were going to replace the Windows ones. However, when I go into the server class itself, the "Matched" tab only shows the devices that match the filter, and when I check a handful of the Windows devices themselves, I don't see the apps tied to the Linux server class. Is anyone else experiencing this as well? And if so, has a fix been found?
Minor point, but the number of seconds in a day is 86400, not 86000.
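For the record, the arithmetic behind that correction:

```python
# A day is 24 hours of 60 minutes of 60 seconds
seconds_per_day = 24 * 60 * 60
print(seconds_per_day)  # -> 86400
```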
You can do something like this - I don't know what you mean by the durable_cursor, but this will append the list of scheduled saved searches, with the calculated next scheduled time from the rest data, and then join the data together based on the saved search name:

<search your skipped searches and calculate durable_cursor>
| append
    [| rest splunk_server=local "/servicesNS/-/-/saved/searches" search="is_scheduled=1 disabled=0"
     | eval next_scheduled_time_e=strptime(next_scheduled_time, "%F %T %Z")
     | fields title next_scheduled_time_e
     | rename title as savedsearch_name ]
| stats values(*) as * max(durable_cursor) as durable_cursor by savedsearch_name
| where next_scheduled_time_e>durable_cursor
Hi All, I have a requirement like this. First I need to fetch all the failed searches (let's say skipped searches) by their savedsearch_name and scheduled_time. If a search was skipped at that scheduled_time, then I need to check whether that scheduled_time lies between durable_cursor and the next scheduled_time. Let's say a saved search called ABC failed (was skipped) at 1712121019. Now I need to check whether this failed scheduled_time lies between the upcoming durable_cursor and the next scheduled_time. The next scheduled_time is 1712121300, and in this event I see a durable_cursor value of 1712121000, which means my failed time is covered in this run. How do I detect via a Splunk query whether my failed searches are covered in the next run? I tried to apply subsearch logic to get the failed savedsearch_name and scheduled_time; I can pass savedsearch_name but not scheduled_time. So my idea is to run a first query to get the failed saved search name and its associated failed scheduled_time, and then in a second query check whether that scheduled_time lies between durable_cursor and the next scheduled_time. How can I achieve this? Any inputs would be appreciated. Thanks
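For reference, the window check being described can be sketched in Python, using the epoch values quoted above (the function name is illustrative, not a Splunk API):

```python
def is_covered(skipped_time, durable_cursor, next_scheduled_time):
    """True if a skipped run's scheduled_time falls inside the window
    the next durable run will re-process."""
    return durable_cursor <= skipped_time <= next_scheduled_time

# ABC was skipped at 1712121019; the next run is scheduled at 1712121300
# and reports durable_cursor=1712121000, so the skipped run is covered.
print(is_covered(1712121019, 1712121000, 1712121300))  # -> True
```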
Hi, I'm Lily. I saved the chart below, but on my dashboard it appears as shown below. Why does it display differently?
One caveat with migrating filesystems directly between different OS instances - it's relatively unlikely (especially with a clean system installation), but because file/directory ownership in the filesystem is stored as UID/GID only, you might find yourself in a situation where the UID/GID values of the new and old systems don't match. So, for example, your splunk:splunk user might map to 1002:1002 on your old Linux instance, while your new one maps splunk:splunk to 1005:1004 and 1002:1002 is used by some interactive user. So you might want to be doubly cautious when moving such filesystems between separate OSes. As far as I remember, when packing/unpacking with tar, ownership is by default recorded as username/groupname and used if that username/groupname is found in the OS when unpacking (unless you explicitly tell tar to just use numeric IDs).
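To make the UID/GID pitfall concrete, here is an illustrative sketch; the usernames and numeric IDs are invented, not taken from any real passwd file:

```python
# name -> UID tables for two hypothetical hosts
old_host = {"splunk": 1002, "alice": 1003}
new_host = {"splunk": 1005, "alice": 1002}

def resolve_owner(uid, passwd_table):
    """Return the username a numeric UID maps to on a host, or None."""
    for name, host_uid in passwd_table.items():
        if host_uid == uid:
            return name
    return None

# Files restored with numeric IDs only: UID 1002 was splunk on the old host...
archived_uid = old_host["splunk"]
# ...but on the new host that same UID belongs to an interactive user.
print(resolve_owner(archived_uid, new_host))  # -> alice
```

GNU tar's --numeric-owner option forces exactly this numeric behaviour on both pack and unpack; the name-based default avoids the problem as long as the names exist on the target system.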
Hi @CheongKing168, try reinstalling the old version, to understand whether the issue is related to the UF version or to the environment. Then open a case with Splunk Support. One additional question: which Windows version are you using? Ciao. Giuseppe
@NoIdea, There are different namespaces for tokens - default, submitted, and environment. You're running into the issue because you're using the "default" tokens. These are the ones we normally use, as they are updated on the fly, whereas the submitted tokens are only updated after clicking the submit button. You can refer to these tokens using the namespace followed by a colon, e.g.: Default: $tok1$ Submitted: $submitted:tok1$ I've tried to understand the values you've put for the tokens and made an alternative dashboard showing the use of submitted tokens: <form version="1.1" theme="light"> <label>answers</label> <fieldset submitButton="true" autoRun="false"> <input type="dropdown" token="tok1" searchWhenChanged="false"> <label>Tok1</label> <choice value="All">*</choice> <choice value="&quot; &quot;AND upper(STATUS)=upper('Active')&quot;">Y</choice> <choice value="&quot; &quot;AND upper(STATUS)=upper('Inactive')&quot;">N</choice> <prefix>Status="</prefix> <default>*</default> </input> <input type="text" token="tok2" searchWhenChanged="false"> <label>UserID</label> <default></default> <prefix> AND UserID=\"*" + upper(</prefix> <suffix>) + "*"</suffix> </input> </fieldset> <row> <panel id="table_1"> <html><h2>Using $$tok1$$</h2><table><tr><td><strong>$$tok1$$=</strong></td><td><textarea>$tok1$</textarea></td></tr><tr><td><strong>$$tok2$$=</strong></td><td><textarea>$tok2$</textarea></td></tr></table> <style>textarea{padding: 4px; font-size:16px;resize:none;width: 300px;border: 1px solid black;} div[id^="table"] td{border:1px solid black;padding: 4px;} div[id^="table"]{width: fit-content; } </style> </html> </panel> </row> <row> <panel id="table_2"> <html><h2>Using $$submitted:tok1$$</h2><table><tr><td><strong>$$submitted:tok1$$=</strong></td><td><textarea>$submitted:tok1$</textarea></td></tr><tr><td><strong>$$submitted:tok2$$=</strong></td><td><textarea>$submitted:tok2$</textarea></td></tr></table></html> </panel> </row> <row> <panel> <html><h2>The Search</h2>| 
search * $submitted:tok1$ $submitted:tok2$ </html> </panel> </row> </form> By putting your evals and conditionals directly into the values, the form should work. Hopefully that gets you closer to what you're after. There is another way to tackle this - but I don't quite understand your search; it's almost SPL but not quite. If the above isn't what you're after, can you explain your search a bit more?
Yes, thank you very much. It works.
Hi All, We want to collect events/metrics/data/logs from New Relic and send them to Splunk Enterprise and Splunk ITSI (please suggest a suitable method for this). Simultaneously, we want to create a new environment for Splunk Enterprise and Splunk ITSI. Please suggest a suitable specification for the new Splunk Enterprise and Splunk ITSI architecture.
Hi @raoul, Maybe the spaces in your LastLogin field are unprintable characters. Can you try the query below, which removes all whitespace?

| inputlookup MSOLUsers
| where match(onPremisesDistinguishedName, "OU=Users")
| where not isnull(LastLogin)
| eval LastLogin=replace(LastLogin,"[^A-Za-z0-9,:]+","")
| eval LastActive=strptime(LastLogin, "%b%d,%Y,%H:%M")
| eval DaysLastActive=round((now() - LastActive) / 86000, 0)
| fields Company, Department, DisplayName, LastLogin, LastActive, DaysLastActive
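As a rough illustration of what the replace/strptime steps do, here is a Python sketch; the sample LastLogin value is invented, and note it divides by 86400 seconds per day:

```python
import re
from datetime import datetime

last_login = "Apr\u00a02, 2024, 14:30"  # contains a non-breaking space

# Mirror replace(LastLogin, "[^A-Za-z0-9,:]+", ""): strip everything that
# is not a letter, digit, comma, or colon, including unprintable whitespace
cleaned = re.sub(r"[^A-Za-z0-9,:]+", "", last_login)
print(cleaned)  # -> Apr2,2024,14:30

# Mirror strptime(LastLogin, "%b%d,%Y,%H:%M")
last_active = datetime.strptime(cleaned, "%b%d,%Y,%H:%M")

# Days since last active (86400 seconds per day)
days_last_active = round((datetime.now() - last_active).total_seconds() / 86400)
```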
Hi Could you clarify what you mean by this question? Are you moving the Splunk indexes on your host to another file system or volume on the same node, or moving the whole node to another box, or…? There are already a couple of answers for both cases which you can find via Google. We can also answer you once we understand your current problem better. r. Ismo
Hi @KhalidAlharthi, As long as the new data store has enough performance, nothing should be affected.
Hi You could learn how to use SPL from your local instructions or https://docs.splunk.com/Documentation/Splunk/9.2.1/Search/GetstartedwithSearch We don't know your data, indexes, etc., so we can't help you, especially when we don't know what you want to find out. r. Ismo
Clear. So an event whose _time field has a "+", in practice, represents a complete _time extraction with all the "date_*" underfields inside. Thanks
Hi @CarolinaHB, I noticed that "#012" exists in your event as an end-of-event marker. You can use the below as a line breaker:

LINE_BREAKER=#012()
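As a quick sanity check of that approach, here is a Python sketch splitting a shortened version of the sample stream at #012; the lookahead for a YYYY-MM-DD timestamp is my own addition (to avoid breaking on any stray #012), and the event text is abridged from the question:

```python
import re

stream = ("Apr 2 22:18:08 04-02 22:17:39#011reason=Allowed#011protocol=HTTP"
          "#011keyprotectiontype= Software Protection"
          "#0122024-04-02 22:17:39#011reason=Allowed#011protocol=SSL"
          "#011keyprotectiontype=N/A")

# Break only at a #012 that is immediately followed by a timestamp,
# leaving the #011 field separators inside each event untouched
events = re.split(r"#012(?=\d{4}-\d{2}-\d{2})", stream)
print(len(events))  # -> 2
```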
Open the "Search & Reporting" application and, using SPL searches against all data, find the password used during the PsExec activity.
Hi @mahesh27, You can filter the results as below:

| mstats sum(count-error) as Failed where index=metrics_index by service errorNumber errortype
| sort 4 - Failed
Hello, I need to break the following data into events, but the events have a different date format at the beginning. Each event ends with the 'keyprotectiontype' field, which sometimes has the value 'NA', and each must always have the 'reason' field at the beginning.   Apr 2 22:18:08 04-02 22: 17:39#011reason=Allowed#011event_id=7353490211603742721#011protocol=HTTP#011action=Allowed#011transactionsize=345241#011responsesize=344806#011requestsize=435#011urlcategory=Operating System and Software Updates#011serverip=92.123.121.156#011requestmethod=GET#011refererURL=None#011useragent=Microsoft BITS/7.8#011product=NSS#011location=Road Warrior#011ClientIP=12.2.11.10#011status=206#011user=lvtorrea@lula.com.es#011url=2.tlu.dl.delivery.mp.microsoft.com/filestreamingservice/files/20c818db-67ad-44d4-8409-4d9dd7986af1?P1=1712128627&P2=404&P3=2&P4=OEkaO+U5XHKvf+lM41oEFDeIKRAD9S6SWgch3BSzA/yxusk1LA44YVdjNg94soDh+D8bYKjPHLpS4296pI6Tcw==#011vendor=Zscaler#011hostname=dkdkdk #011clientpublicIP=1.111.120.11#011threatcategory=None#011threatname=None#011filetype=None#011appname=General Browsing#011pagerisk=0#011threatseverity=None#011department=XXXXX (1422)#011urlsupercategory=Information Technology#011appclass=General Browsing#011dlpengine=None#011urlclass=Business Use#011threatclass=None#011dlpdictionaries=None#011fileclass=None#011bwthrottle=NO#011contenttype=application/octet_stream#011unscannabletype=None#011devicehostname=MAA#011deviceowner=lvtorrea#011keyprotectiontype= Software Protection#0122024-04-02 22:17:39#011reason=Allowed#011event_id=7353490211788947457#011protocol=SSL#011action=Allowed#011transactionsize=9568#011responsesize=4934#011requestsize=4634#011urlcategory=Microsoft_WVD_URL#011serverip=20.189.173.26#011requestmethod=NA#011refererURL=None#011useragent=Unknown#011product=NSS#011location=Road 
Warrior#011ClientIP=192.168.0.147#011status=NA#011user=jlvaldezo@lula.com.es#011url=us-v10c.events.data.microsoft.com#011vendor=Zscaler#011hostname=dkdkdk#011clientpublicIP=1.19.72.10#011threatcategory=None#011threatname=None#011filetype=None#011appname=General Browsing#011pagerisk=0#011threatseverity=None#011department=xxxxxxx MANAGEMENT#011urlsupercategory=User-defined#011appclass=General Browsing#011dlpengine=None#011urlclass=Bandwidth Loss#011threatclass=None#011dlpdictionaries=None#011fileclass=None#011bwthrottle=NO#011contenttype=Other#011unscannabletype=None#011devicehostname=KDKD#011deviceowner=jlvaldezo#011keyprotectiontype=N/A#012202     Can you help me?
Hi @verbal_666, You can see the related documentation below about timestamp information. Events that are missing date_* fields may not have extracted time inside.

https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/Usedefaultfields#Use_default_fields

Only events that have timestamp information in them, as generated by their respective systems, will have date_* fields. If an event has a date_* field, it represents the value of the time/date directly from the event itself. If you have specified any timezone conversions or changed the value of the time/date at indexing or input time (for example, by setting the timestamp to be the time at index or input time), these fields will not represent that.