
All Posts

You mean something like this?

| eval date = strftime(_time, "%F")
| stats min(_time) as start max(_time) as end by date
| eval duration = round(end - start)
| fields - start end

date          duration
2024-10-04    61267
2024-10-05    8

Here is the emulation

| makeresults format=csv data="jobId, date, skip1, skip2, time
Job1, 10/4/2024, 20241004, 10/4/2024, 0:38:27
Job1, 10/4/2024, 20241004, 10/4/2024, 0:38:41
Job 2, 10/4/2024, 20241004, 10/4/2024, 17:39:12
Job 2, 10/4/2024, 20241004, 10/4/2024, 17:39:24
Job 2, 10/4/2024, 20241004, 10/4/2024, 17:39:34
Job1, 10/5/2024, 20241004, 10/4/2024, 0:38:27
Job1, 10/5/2024, 20241004, 10/4/2024, 0:38:35"
| eval _time = strptime(date . " " . time, "%m/%d/%Y %H:%M:%S")
``` data emulation above ```
Splunk Enterprise Version: 9.2.0.1
OpenShift Version: 4.14.30

We used to have OpenShift Event logs coming in under sourcetype openshift_events under index=openshift_generic_logs.

However, starting Sept 29, we suddenly did not receive any logs from that index and sourcetype. The Splunk forwarders are still running and we did not make any changes to the configuration. Here is the addon.conf that we have:

004-addon.conf
[general]
# addons can be run in parallel with agents
addon = true

[input.kubernetes_events]
# disable collecting kubernetes events
disabled = false
# override type
type = openshift_events
# specify Splunk index
index =
# (obsolete, depends on kubernetes timeout)
# Set the timeout for how long request to watch events going to hang reading.
# eventsWatchTimeout = 30m
# (obsolete, depends on kubernetes timeout)
# Ignore events last seen later that this duration.
# eventsTTL = 12h
# set output (splunk or devnull, default is [general]defaultOutput)
output =
# exclude managed fields from the metadata
excludeManagedFields = true

[input.kubernetes_watch::pods]
# disable events
disabled = false
# Set the timeout for how often watch request should refresh the whole list
refresh = 10m
apiVersion = v1
kind = pod
namespace =
# override type
type = openshift_objects
# specify Splunk index
index =
# set output (splunk or devnull, default is [general]defaultOutput)
output =
# exclude managed fields from the metadata
excludeManagedFields = true

Apologies if I'm missing something obvious here.

Thank you!
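In case it helps to narrow this down, one quick check (a rough sketch only; it uses the index and sourcetype names from the post above and assumes you can search over a long enough time range) is to see when each host last sent an event to that sourcetype:

| tstats latest(_time) as last_event where index=openshift_generic_logs sourcetype=openshift_events by host
| eval last_event = strftime(last_event, "%F %T")

If the hosts show up with a last_event around Sept 29, the collectors are still known to Splunk but the events input stopped producing data; if they do not show up at all over the same period, the problem is more likely on the forwarding/connection side.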
Hello @abow Can you check this article: https://splunk.my.site.com/customer/s/article/How-to-make-Splunk-Add-on-for-AWS-to-fetch-data-via-cross-account-configuration ? Hopefully it will resolve your queries.
Hi @Tiong.Koh , I apologize for the delay in my response. I reviewed this limitation and found that an idea request was previously submitted regarding this, but it was rejected due to concerns about system performance. Additionally, the 32,767-character limit was deemed sufficient for SQL analysis at the time. Regarding the character limit, the current maximum is 32,767 characters, including white spaces. If this limitation is critical to your business processes, I recommend reaching out to your account manager or sales representative with a business justification so that they can discuss it with our product manager.

Regards, Martina
Hi @ilhwan
> but I don't see a magnifying glass on any of the panels
Please mouse over the lower right corner of the panel; that is when the magnifying glass appears.

For example, on the DMC, Indexing ---> Indexes and Volumes ---> Indexes and Volumes: Instance has this panel. When I mouse over it, only then does the magnifying glass appear.
Maybe I should rephrase my question to this: Why can't a hot bucket roll straight to a cold bucket? I get that a hot bucket is actively being written to, which is why I said in my post that I think that is the reason warm buckets have to exist in the first place. But all I've been told so far is that a hot bucket is actively being updated and a warm bucket is not, which, I'm afraid, doesn't exactly answer the question above.
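For reference, the settings that drive the hot-to-warm and warm-to-cold transitions live in indexes.conf; a minimal sketch (the index name and values here are made-up examples, not recommendations) looks roughly like this:

[my_index]
homePath   = $SPLUNK_DB/my_index/db        # hot and warm buckets both live under homePath
coldPath   = $SPLUNK_DB/my_index/colddb    # cold buckets are moved to coldPath
thawedPath = $SPLUNK_DB/my_index/thaweddb
maxDataSize = auto                # size at which a hot bucket rolls to warm
maxHotBuckets = 3                 # how many hot buckets may be open for writing at once
maxWarmDBCount = 300              # when exceeded, the oldest warm buckets roll to cold
frozenTimePeriodInSecs = 7776000  # ~90 days; cold buckets older than this are frozen

The hot-to-warm roll mostly means closing a bucket for writing while it stays on the same storage under homePath, whereas warm-to-cold moves closed buckets to the (often cheaper) storage under coldPath, which is part of why the intermediate warm stage exists.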
Hi guys, Does anyone know whether, even with the Trial version of Splunk Observability Cloud, it still accepts logs sent to it directly by the Splunk OTel Collector?

According to this page: https://docs.splunk.com/observability/en/gdi/opentelemetry/components/splunk-hec-exporter.html , it says: "Caution - Splunk Log Observer is no longer available for new users. You can continue to use Log Observer if you already have an entitlement."

As I'm using the Trial version, I'm just curious to see how Observability Cloud processes logs via fluentd, rather than using Log Observer Connect, which uses the Universal Forwarder to send logs to Splunk Cloud/Enterprise first, and then Observability Cloud just views the log events via the integration. It seems that Observability Cloud is not showing the ordinary syslog or Windows events which get sent to it automatically out of the box by the Splunk OTel Collector. I tried setting up my own log file, but nothing shows up in O11y either.
ok this should work but one wrinkle, I want to do this on two fields, meaning these are my records:

Job1    10/4/2024    20241004    10/4/2024    0:38:27
Job1    10/4/2024    20241004    10/4/2024    0:38:41
Job 2   10/4/2024    20241004    10/4/2024    17:39:12
Job 2   10/4/2024    20241004    10/4/2024    17:39:24
Job 2   10/4/2024    20241004    10/4/2024    17:39:34
Job1    10/5/2024    20241004    10/4/2024    0:38:27
Job1    10/5/2024    20241004    10/4/2024    0:38:35

From this I want to be able to say: Job1 took 14 seconds on 10/4/2024 and Job 2 took 22 seconds on 10/4, and Job1 took 8 seconds on 10/5.
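A sketch of how that per-job, per-day duration could be computed on top of the emulation earlier in the thread (assuming the job name field is called jobId and the day field is called date, as in the makeresults data) might be:

| stats min(_time) as start max(_time) as end by jobId date
| eval duration = round(end - start)
| fields - start end

With the sample records above this should give Job1 a duration of 14 seconds and Job 2 a duration of 22 seconds on 10/4/2024, and Job1 a duration of 8 seconds on 10/5/2024.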
Agreed. I will mark this as closed and raise a new question for what I am trying to do. Thanks for your help
I believe the issue might be related to field extractions. There's likely a field called delta_time or delete/create in the Search app that isn't set to global for all apps. To troubleshoot:

1. Inspect the search.log file.
2. Look for entries containing "lispy".
3. Examine the search TERMS in these entries.
4. See if you can find anything related to the fields mentioned above.

This approach might help you identify why the search isn't working as expected for users without direct index access. If you find that certain fields aren't available globally, you may need to adjust their extraction settings.
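A quick way to compare how those extractions are scoped across apps (a hedged sketch; the REST endpoint is the standard one for search-time extractions, and the filter just uses the field names mentioned in this thread) could be:

| rest /servicesNS/-/-/data/props/extractions
| search value="*delete_time*" OR value="*create_time*"
| table eai:acl.app eai:acl.sharing stanza attribute value

Anything that only appears with eai:acl.app=search and eai:acl.sharing=app (rather than global) would explain why the same search behaves differently in the custom app.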
<row>
  <panel>
    <title> VIEW BY ENTITY</title>
    <input type="dropdown" token="tokEnvironment" searchWhenChanged="true">
      <label>Domain</label>
      <choice value="Costing">Costing</choice>
      <change>
        <set token="inputToken">""</set>
        <set token="outputToken">""</set>
        <set token="inputToken2">""</set>
        <set token="outputToken2">""</set>
        <unset token="tokSystem"></unset>
        <unset token="form.tokSystem"></unset>
      </change>
      <default>Cost</default>
      <initialValue>Cost</initialValue>
    </input>
    <input type="dropdown" token="tokSystem" searchWhenChanged="false">
      <label>Data Entity</label>
      <fieldForLabel>$tokEnvironment$</fieldForLabel>
      <fieldForValue>$tokEnvironment$</fieldForValue>
      <search>
        <!--<progress>-->
        <!-- match attribute for condition uses eval-like expression (see Splunk search language 'eval' command) -->
        <!-- logic: if resultCount is 0, then show a static html element, and hide the chart element -->
        <!-- <condition match="'job.resultCount'== 0">-->
        <!--   <set token="show_html">true</set>-->
        <!-- </condition>-->
        <!-- <condition>-->
        <!--   <unset token="show_html"/>-->
        <!-- </condition>-->
        <!--</progress>-->
        <query>| makeresults | fields - _time | eval Costing="GetQuoteByCBD,bolHeader,bolLineItems,laborProcess,costSheetCalc,FOB" | fields $tokEnvironment$ | makemv $tokEnvironment$ delim="," | mvexpand $tokEnvironment$</query>
      </search>
      <change>
        <condition match="$label$==&quot;get&quot;">
          <set token="inputToken">get</set>
          <set token="outputToken">get</set>
          <set token="inputToken2">b</set>
          <set token="outputToken2">b</set>
          <set token="inputToken3">c</set>
          <set token="outputToken3">c</set>
          <set token="inputToken4">d</set>
          <set token="outputToken4">d</set>
          <set token="inputToken5">e</set>
          <set token="outputToken5">e</set>
          <set token="inputToken4">d</set>
          <set token="outputToken4">d</set>
          <set token="inputToken3">3</set>
          <set token="outputToken3">3</set>
          <set token="apiToken">d</set>
          <set token="entityToken">get</set>
        </condition>
        <condition match="$label$==&quot;batch&quot;">
          <set token="inputToken">batch</set>
          <set token="outputToken">batch</set>
          <set token="inputToken2">c</set>
          <set token="outputToken2">c</set>
          <set token="inputToken">b</set>
          <set token="outputToken4">b</set>
          <set token="inputToken3">d</set>
          <set token="outputToken3">d</set>
          <set token="apiToken">b</set>
          <set token="inputToken5">f</set>
          <set token="outputToken5">f</set>
          <set token="entityToken">batch</set>
        </condition>
        <condition match="$label$==&quot;Calc&quot;">
          <set token="inputToken">Calc</set>
          <set token="outputToken">Calc</set>
          <set token="inputToken2">init</set>
          <set token="outputToken2">init</set>
          <set token="inputToken">Calc</set>
          <set token="outputToken4">Calc</set>
          <set token="inputToken3">d</set>
          <set token="outputToken3">d</set>
          <set token="apiToken">Calc</set>
          <set token="entityToken">Calc</set>
        </condition>
      </change>
      <default>get</default>
    </input>
    <input type="time" token="time_picker" searchWhenChanged="true">
      <label>Time</label>
      <default>
        <earliest>-15m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <html>
      <u1> </u1>
    </html>
  </panel>
</row>
<row>
  <panel>
    <title>Init Lambda</title>
    <table>
      <search>
        <query>index="" source IN ("/aws/lambda/aa-$outputToken$-$stageToken$-$outputToken2$") | spath msg | search msg="gemini:streaming:info:*" | stats count by msg</query>
        <earliest>$time_picker.earliest$</earliest>
        <latest>$time_picker.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="dataOverlayMode">heatmap</option>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
  <panel>
    <title>Init Lambda - Duplicate</title>
    <table>
      <search>
        <query>index="" source IN ("/aws/lambda/aa-$outputToken$-$stageToken$-$outputToken2$") | spath msg | search msg="gemini:streaming:warning:*" | stats count by msg</query>
        <earliest>$time_picker.earliest$</earliest>
        <latest>$time_picker.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="dataOverlayMode">heatmap</option>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
  <panel>
    <title>Init Lambda - Error</title>
    <table>
      <search>
        <query>index="" source IN ("/aws/lambda/aa-$outputToken$-$stageToken$-$outputToken2$") | spath msg | search msg="gemini:streaming:error:*" | stats count by msg</query>
        <earliest>$time_picker.earliest$</earliest>
        <latest>$time_picker.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="dataOverlayMode">heatmap</option>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
</row>
My dashboard runs the searches in each row panel even when the submit button is not clicked. I tried setting autoRun="false" and searchWhenChanged="false" and added the submit button tag, so that a search should only be triggered once the dropdown and time are selected and the user presses the submit button.
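For comparison, a minimal form skeleton where searches only run after Submit (a sketch only; the labels, tokens, and placeholder query are invented for illustration, not taken from the dashboard above) usually looks like this:

<form>
  <label>Example form</label>
  <!-- inputs live in a top-level fieldset; submitButton gates token submission, autoRun stops the initial run -->
  <fieldset submitButton="true" autoRun="false">
    <input type="dropdown" token="tok_env" searchWhenChanged="false">
      <label>Domain</label>
      <choice value="Costing">Costing</choice>
    </input>
    <input type="time" token="time_picker" searchWhenChanged="false">
      <label>Time</label>
      <default>
        <earliest>-15m</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <!-- placeholder query; the panel search picks up the tokens only after Submit -->
          <query>index=_internal $tok_env$ | stats count by sourcetype</query>
          <earliest>$time_picker.earliest$</earliest>
          <latest>$time_picker.latest$</latest>
        </search>
      </table>
    </panel>
  </row>
</form>

One thing that often matters: inputs defined inside individual panels (as in the XML above) and inputs with searchWhenChanged="true" can still update tokens immediately, so moving the inputs into the fieldset and setting searchWhenChanged="false" on each of them tends to be part of the fix.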
I have two of the exact same searches; one works within the Search app but not in this custom internal app that packages the saved search. The search works in both apps until the where command is introduced.

| eval delta_time = delete_time - create_time, hours=round(delta_time/3600,2)
| where delta_time < (48 * 3600)

This returns results in the Search app but not in the app that houses this alert. The app is shared globally, as are all the objects within it. I also have the admin role with no restricted indexes or data.
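A quick way to check whether the fields feeding that where clause are even populated when the search runs from the custom app (a hedged sketch; it just reuses the field names from the search above with standard eval functions) would be:

| eval delta_time = delete_time - create_time
| eval delta_status = if(isnull(delta_time), "missing", "present")
| stats count by delta_status

If delta_status comes back as "missing" in the custom app but "present" in the Search app, the difference is in field extraction scope rather than in the where clause itself.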
Typically this is how we set up the license slaves. I recommend trying the below.

Use the CLI command from the bin directory on the indexer:

./splunk edit licenser-localslave -master_uri https://licensemaster:8089

Or you can update the server.conf file with:

[license]
master_uri = https://mylicensemasterserver:8089

Refer: https://docs.splunk.com/Documentation/Splunk/9.3.1/Admin/LicenserCLIcommands

I would recommend opening a support case if you are still stuck after this.

Hope this helps
1. If you're doing indexed extractions, your data is processed as parsed. Adding search-time extractions will only result in duplicate fields (or misassigned fields in the case of not-well-defined formats).

2. In general, unless you have a file input with a header specifying the fields within that file, there's no way to assign fields dynamically to indexed-extraction fields.

3. You could try making search-time extraction definitions that match only specific message templates, like

EXTRACT-fields-for-app1 = ^(?<Environment>\S+)\s+(?<ProductName>\S+)\s+(?<UtcDate>\S+)\s+(?<RequestId>\S+)\s+(?<ClientId>\S+)\s+(?<ClientIp>\d+\.\d+\.\d+\.\d+)$

This should match only data for app1 because it has a specific number of whitespace-separated fields and has the IP value anchored in a particular place within the event. You can have several other similar extraction definitions, each covering a separate event template.
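As a rough illustration of how a couple of these could sit side by side in props.conf (the sourcetype name and the second template are invented placeholders):

[my:custom:sourcetype]
# app1 events: six whitespace-separated fields ending in an IP address
EXTRACT-fields-for-app1 = ^(?<Environment>\S+)\s+(?<ProductName>\S+)\s+(?<UtcDate>\S+)\s+(?<RequestId>\S+)\s+(?<ClientId>\S+)\s+(?<ClientIp>\d+\.\d+\.\d+\.\d+)$
# app2 events: a different template, e.g. four fields ending in a numeric status code
EXTRACT-fields-for-app2 = ^(?<Environment>\S+)\s+(?<ProductName>\S+)\s+(?<RequestId>\S+)\s+(?<StatusCode>\d+)$

Each definition only extracts fields from events that match its own template, so they can coexist on the same sourcetype.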
https://www.cisco.com/c/en/us/td/docs/security/cisco-secure-cloud-app/user-guide/cisco-security-cloud-user-guide.html
To solve your problem with third-party users and time range flexibility, try this:

1. Make a new summary index just for this data.
2. Set up a scheduled search that writes the required data into this new index regularly (a sketch of this is below).
3. Change your dashboard to use this new index.
4. Give the third-party users access to only this new index.

Now you can add a time range picker to your dashboard. Hope this helps. Karma would be appreciated.
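A rough sketch of step 2, assuming the summary index is called third_party_summary and using a placeholder source index (neither name is from the thread), could be a scheduled search that ends in collect, reusing the authentication search terms from earlier in this thread:

index=my_source_index tag=name NOT "health-*" (words="Authentication words" OR MESSAGE_TEXT="Authentication word")
| bucket _time span=1h
| stats count as attempts by host, _time
| collect index=third_party_summary

Schedule it to run, for example, hourly, then point the dashboard panels at index=third_party_summary with the normal time range picker; that summary index is the only one the third-party role needs read access to.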
1. You can start with your base search. Add a time range and average calculation:

index=* tag=name NOT "health-*" (words="Authentication words" OR MESSAGE_TEXT="Authentication word")
| bucket _time span=1d
| stats count as daily_count by host, _time
| stats avg(daily_count) as avg_daily_count by host

2. Create a dashboard and add a table panel using this search.

3. Add visualizations like bar charts to represent the data graphically.

Key Metrics to Track:
- Total AUTHZ attempts
- Successful vs. failed authorizations/logins
- Authorization attempts by host
- Authorization attempts by user
- Peak authorization times
- Unusual patterns or anomalies

Dashboard Components:
- Summary statistics panel
- Time series graph of authorization attempts
- Top hosts by authorization usage (table or bar chart)
- Top users by authorization attempts (table or bar chart)
- Geographical map of authorization attempts (if applicable)
- Failed authorization attempts breakdown

The links below should help you out.
Refer:
https://docs.splunk.com/Documentation/Splunk/9.3.1/SearchTutorial/Createnewdashboard
https://www.splunk.com/en_us/resources/videos/create-dashboard-in-splunk-enterprise.html
https://splunkbase.splunk.com/app/1603

Hope this helps
index=name tag=name NOT "health-*" (words="Authentication words" OR MESSAGE_TEXT="Authentication word") | stats count by host | table host, count
@sainag_splunk  This method did work until I found out that the users who are viewing the dashboard are not able to see the results, due to not having access to the index. The users viewing this dashboard are third parties that we do not want to have access to the index (for example, users outside of the org), hence the reason the dashboard used saved reports, where the data is viewable. But, like I mentioned, we face the issue of changing the time range picker, since the saved reports display static results, whereas we want them to change as we specify a time range with the input.

So inline searches would not work in this scenario.