All Posts

Hi All, Is it possible to use Splunk for tracking logs from SAP CPQ, CPI, C4C? I couldn't find relevant information regarding this anywhere. Appreciate your help!
Hello, Notion does not support on-premises Splunk or Splunk Cloud trials; it only supports Splunk Cloud Enterprise. If you use Splunk Cloud Enterprise, you need to enter the URL in the format below. https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector

Send data to HTTP Event Collector on Splunk Cloud Platform: you must send data using a specific URI for HEC.

The standard form for the HEC URI in Splunk Cloud Platform free trials: <protocol>://http-inputs-<host>.splunkcloud.com:<port>/<endpoint>
The standard form for the HEC URI in Splunk Cloud Platform: <protocol>://http-inputs-<host>.splunkcloud.com:<port>/<endpoint>
The standard form for the HEC URI in Splunk Cloud Platform on Google Cloud: <protocol>://http-inputs.<host>.splunkcloud.com:<port>/<endpoint>
The standard form for the HEC URI in Splunk Cloud FedRAMP Moderate on AWS GovCloud: <protocol>://http-inputs.<host>.splunkcloudgc.com:<port>/<endpoint>
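For example, with a hypothetical stack name of mystack, a filled-in URI for the standard HEC event endpoint (/services/collector/event) on port 443 would look like this:

https://http-inputs-mystack.splunkcloud.com:443/services/collector/event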
Hello @SplunkDash, can you please check the below?

| makeresults
| eval _raw="accid,nameA,addressA,cellA
002,test1,tadd1,1234
003,test2,tadd2,1256
003,test2,tadd2,5674
004,test3,tadd3,2345
005,test4,tadd4,4567
006,test5,tadd5,7800
006,test5,tadd5,9900"
| multikv forceheader=1
| eval sourcetype="sourcetypeA"
| append
    [| makeresults
    | eval _raw="accid,nameB,addressB,cellB
002,test1,tadd1,1234
003,test2,tadd2,5674
004,test3,tadd3,2345
005,test4,tadd3,4567
006,test5,tadd5,9900"
    | multikv forceheader=1
    | eval sourcetype="sourcetypeB"]
| kv
| stats values(*) as * by accid
| where mvcount(nameA) != mvcount(nameB) OR mvcount(addressA) != mvcount(addressB) OR mvcount(cellA) != mvcount(cellB)

Please let me know if you have any questions about the above. Please accept the solution and hit Karma if this helps!
Hello, thank you @ITWhisperer @meetmshah for the quick reply, and apologies for the delay in response. The solution indeed works. However, when I try to create a trellis layout (split by S_no), the graphs are displayed in the original order (1,3,2,4,5,6) and not how I want them, i.e. 1,2,3,4,5,6. Is this a bug by any chance?
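Since trellis panels appear to follow the order of the results, one possible workaround (a sketch only, assuming S_no is numeric; not a confirmed fix for any trellis bug) is to sort the results by the split field, zero-padding it so that lexicographic order matches numeric order:

| eval S_no=printf("%02d", S_no)
| sort 0 S_no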
@danspav Hello, thank you for your answer, it was very helpful! I suspected it had something to do with the default and submitted token topics, but even though I searched online I did not find any clear explanation. In this regard, do you have a link to share that explains these topics once and for all? I would really like a clear understanding so that I don't have to test my tokens' behavior in my dashboards every time (I need some solid understanding here). PS: I really didn't know that you could refer to submitted tokens by just typing submitted: before the token name. Very helpful!!
I am stuck at 'waiting for connection' whereas the agent connection is showing green and connected, as shown in the picture below. Can somebody help me, please?

^ Post edited by @Ryan.Paredez to redact the Controller name and URL from a screenshot. Please do not share your account name or Controller URL in Community posts, for security and privacy reasons.
I am using regex to extract fields from the JSON data below. I want to extract the fields as key-value pairs, especially from log.message. For example, I need the "action" field from log.message:

clusterName: cluster-9gokdwng4f
internal_tag: internal_security
log: {
    message: {"action":"EXECUTE","class":"System-Queue","eventC":"Data access event","eventT":"Obj-Open with role","timeStamp":"Wed 2024 Apr 03, 04:58:28:932"}
    stack:
    thread_name: Batch-1
    timestamp: 2024-04-03T04:58:28.932Z
    version: 1
}
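Since log.message is itself an escaped JSON string, spath may be simpler than regex here. A minimal sketch, assuming the raw event is valid JSON:

| spath path=log.message output=message_json
| spath input=message_json path=action output=action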
Hi everyone, is anyone else having issues with the Clients tab not showing the correct server classes for the host names? For example, we have Windows systems that are being labeled as Linux because we have a server class with a * filter but restricted to the linux-x86_64 machine type. This almost gave me a heart attack because I thought the apps tied to this server class were going to replace the Windows ones. However, when I go into the server class itself, the "Matched" tab only shows the devices that match the filter, and when I check a handful of Windows devices, I don't see the apps that are tied to the Linux server class. Is anyone else experiencing this as well? And if so, has a fix been found?
Minor point, but the number of seconds in a day is 86400, not 86000.
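For reference, the corrected conversion would look like this (a sketch, assuming an epoch-time field named LastActive as in the thread's query):

| eval DaysLastActive=round((now() - LastActive) / 86400, 0)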
You can do something like this - I don't know what you mean by the durable_cursor, but this will append the list of scheduled saved searches, with the next scheduled time calculated from the REST data, and then join the data together based on the saved search name:

search your skipped searches
calculate durable_cursor
| append
    [| rest splunk_server=local "/servicesNS/-/-/saved/searches" search="is_scheduled=1 disabled=0"
    | eval next_scheduled_time_e=strptime(next_scheduled_time, "%F %T %Z")
    | fields title next_scheduled_time_e
    | rename title as savedsearch_name]
| stats values(*) as * max(durable_cursor) as durable_cursor by savedsearch_name
| where next_scheduled_time_e>durable_cursor
Hi All, I have a requirement like this. First, I need to fetch all the failed (let's say skipped) searches by their savedsearch_name and scheduled_time. If a search was skipped at that scheduled_time, I then need to check whether that scheduled_time lies between durable_cursor and the next scheduled_time.

Let's say a saved search called ABC failed (was skipped) at 1712121019. I now need to check whether this failed scheduled_time lies between the upcoming durable_cursor and the next scheduled_time. The next scheduled_time is 1712121300, and in this event I see a durable_cursor value of 1712121000, which means my failed run is covered in the next run.

How do I detect via a Splunk query whether my failed searches are covered in the next run or not? I tried to apply subsearch logic to get the failed savedsearch_name and scheduled_time: I can pass savedsearch_name, but not the scheduled_time. So my idea is to run a first query to take the failed saved search name and its associated failed scheduled_time, and then in a second query check whether scheduled_time lies between durable_cursor and the next scheduled_time. How can I achieve this?

Any inputs would be appreciated. Thanks
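For the range check itself, a minimal sketch, assuming each result carries the skipped run's scheduled_time together with the durable_cursor and next_scheduled_time_e fields produced in the reply above:

| eval covered=if(scheduled_time>=durable_cursor AND scheduled_time<=next_scheduled_time_e, "covered", "not covered")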
Hi, I'm Lily. I saved the chart below, but on my dashboard it is displayed as shown below. Why is it displayed differently?
One caveat with migrating filesystems directly between different instances of an OS - it's relatively unlikely (especially with a clean system installation), but as file/directory ownership in the filesystem is stored as UID/GID only, you might find yourself in a situation where the UID/GID values of the new and old systems don't match. For example, your splunk:splunk user might map to 1002:1002 on your old Linux instance, but your new one might map splunk:splunk to 1005:1004, and 1002:1002 could be used by some interactive user. So you might want to be doubly cautious when moving such filesystems between separate OSes. As far as I remember, when packing/unpacking with tar, ownership is by default preserved by username/groupname and applied if that username/groupname is found in the OS when unpacking (unless you explicitly tell tar to just use numeric IDs, e.g. with --numeric-owner).
Hi @CheongKing168, try reinstalling the old version to understand whether the issue is related to the UF version or to the environment. Then open a case with Splunk Support. One additional question: which Windows version are you using? Ciao. Giuseppe
@NoIdea, There are different namespaces for tokens - default, submitted, and environment. You're running into the issue because you're using the "default" tokens. These are the ones we normally use, as they are updated on the fly, whereas the submitted tokens are only updated after clicking the submit button. You can refer to these tokens using the namespace followed by a colon, e.g.:

Default: $tok1$
Submitted: $submitted:tok1$

I've tried to understand the values you've put in the tokens and made an alternative dashboard showing the use of submitted tokens:

<form version="1.1" theme="light">
  <label>answers</label>
  <fieldset submitButton="true" autoRun="false">
    <input type="dropdown" token="tok1" searchWhenChanged="false">
      <label>Tok1</label>
      <choice value="All">*</choice>
      <choice value="&quot; &quot;AND upper(STATUS)=upper('Active')&quot;">Y</choice>
      <choice value="&quot; &quot;AND upper(STATUS)=upper('Inactive')&quot;">N</choice>
      <prefix>Status="</prefix>
      <default>*</default>
    </input>
    <input type="text" token="tok2" searchWhenChanged="false">
      <label>UserID</label>
      <default></default>
      <prefix> AND UserID=\"*" + upper(</prefix>
      <suffix>) + "*"</suffix>
    </input>
  </fieldset>
  <row>
    <panel id="table_1">
      <html><h2>Using $$tok1$$</h2>
        <table>
          <tr><td><strong>$$tok1$$=</strong></td><td><textarea>$tok1$</textarea></td></tr>
          <tr><td><strong>$$tok2$$=</strong></td><td><textarea>$tok2$</textarea></td></tr>
        </table>
        <style>
          textarea{padding: 4px; font-size:16px;resize:none;width: 300px;border: 1px solid black;}
          div[id^="table"] td{border:1px solid black;padding: 4px;}
          div[id^="table"]{width: fit-content; }
        </style>
      </html>
    </panel>
  </row>
  <row>
    <panel id="table_2">
      <html><h2>Using $$submitted:tok1$$</h2>
        <table>
          <tr><td><strong>$$submitted:tok1$$=</strong></td><td><textarea>$submitted:tok1$</textarea></td></tr>
          <tr><td><strong>$$submitted:tok2$$=</strong></td><td><textarea>$submitted:tok2$</textarea></td></tr>
        </table>
      </html>
    </panel>
  </row>
  <row>
    <panel>
      <html><h2>The Search</h2>| search * $submitted:tok1$ $submitted:tok2$ </html>
    </panel>
  </row>
</form>

By putting your evals and conditionals directly into the values, the form should work. Hopefully that gets you closer to what you're after. There is another way to tackle this - but I don't quite understand your search. It's almost SPL but not quite. If the above isn't what you're after, can you explain your search a bit more?
Yes. Thank you very much. It works.
Hi All, we want to collect events/metrics/data/logs from New Relic and send them to Splunk Enterprise and Splunk ITSI (please suggest a suitable method for this). At the same time, we want to create a new environment for Splunk Enterprise and Splunk ITSI. Please suggest a suitable specification for the new Splunk Enterprise and Splunk ITSI architecture.
Hi @raoul, maybe the spaces in your LastLogin field are unprintable characters. Can you try the query below, which strips all whitespace?

| inputlookup MSOLUsers
| where match(onPremisesDistinguishedName, "OU=Users")
| where not isnull(LastLogin)
| eval LastLogin=replace(LastLogin,"[^A-Za-z0-9,:]+","")
| eval LastActive=strptime(LastLogin, "%b%d,%Y,%H:%M")
| eval DaysLastActive=round((now() - LastActive) / 86000, 0)
| fields Company, Department, DisplayName, LastLogin, LastActive, DaysLastActive
Hi, could you clarify what you mean by this question? Are you moving the Splunk indexes on your host to another file system or volume on the same node, or moving the whole node to another box, or...? There are already a couple of answers for both cases, which you can find via Google. We can also answer you once we understand your current problem better. r. Ismo
Hi @KhalidAlharthi, As long as the new data store has enough performance, nothing should be affected.