All Posts

Hi @gcusello, Thanks for the feedback. I wanted to understand: if we change the OS on the first physical node after restoring the backup, that node will now be running Red Hat while the other 3 are still running CentOS. Will this node still be part of the cluster? Can servers with different OSes be part of the same cluster? Thanks
Try something like this:

| streamstats count by ReasonCode EquipmentName reset_on_change=t global=f
| where count=1
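To see how reset_on_change behaves, here is a minimal runnable sketch that fabricates sample events with makeresults (the ReasonCode and EquipmentName values are invented for illustration):

| makeresults format=csv data="ReasonCode, EquipmentName
A, Pump1
A, Pump1
B, Pump1
B, Pump1
A, Pump2"
| streamstats count by ReasonCode EquipmentName reset_on_change=t global=f
| where count=1

The count resets whenever the ReasonCode/EquipmentName pair changes, so where count=1 keeps only the first event of each consecutive run.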
I migrated to v9.1.5 and have the TA-XLS app installed and working from v7.3.6. Running 'outputxls' now generates a 'cannot concat str to bytes' error for the following line of the app's outputxls.py file:

try: csv_to_xls(os.environ['SPLUNK_HOME'] + "/etc/apps/app_name/appserver/static/fileXLS/" + output)

Things I have tried:
- Encoding by appending .encode('utf-8') to the string -> not working.
- Importing the SIX and FUTURIZE/MODERNIZE libraries and running them to "upgrade" the script: it just added "from __future__ import absolute_import" and changed a line -> not working.
- Defining each variable separately, among other things -> not working:

splunk_home = os.environ['SPLUNK_HOME']
static_path = '/etc/apps/app_name/appserver/static/fileXLS/'
output_bytes = output
csv_to_xls((splunk_home + static_path.encode(encoding='utf-8') + output))

I rely on this app to work, so any kind of help is appreciated. Thanks!
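If this is the usual Python 3 str/bytes mismatch, the fix is to decode the bytes operand rather than encode the str ones. A minimal sketch, assuming `output` is the value arriving as bytes (csv_to_xls, output, and the app path come from the post; the isinstance guard is my assumption):

import os

# Under Python 3, decode the bytes operand so every piece of the path is str
if isinstance(output, bytes):
    output = output.decode('utf-8')

path = os.path.join(os.environ['SPLUNK_HOME'], 'etc', 'apps', 'app_name',
                    'appserver', 'static', 'fileXLS', output)
csv_to_xls(path)

If output is actually str and some other operand in the expression is bytes, the same decode-before-concatenating approach applies to that operand instead.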
@mrilvan there is only a Splunk app for that at the moment and nothing on the SOAR side. However, if the API is available, there is nothing stopping you from building a custom app in the platform, as I am sure XSOAR is just another REST API.
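To illustrate the "just another REST API" point: a custom connector ultimately wraps an authenticated HTTP call. A hedged Python sketch (the host, endpoint, payload fields, and header are hypothetical placeholders, not the documented XSOAR API):

import requests

XSOAR_URL = "https://xsoar.example.com"  # hypothetical host
API_KEY = "your-api-key"                 # hypothetical credential

# Hypothetical endpoint: create an incident with a JSON body
resp = requests.post(
    XSOAR_URL + "/incident",
    headers={"Authorization": API_KEY},
    json={"name": "Suspicious login", "severity": 2},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())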
You appear to be missing part of the answer - hot and warm buckets are normally stored on expensive fast storage, whereas (in order to reduce costs) cold buckets are stored on cheaper slower storage. Using these distinctions, Splunk gives organisations the flexibility to manage the cost of their storage infrastructure.
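In practice the split is expressed per index in indexes.conf by pointing homePath (hot and warm buckets) and coldPath at different volumes. A sketch with illustrative mount points (the paths are assumptions, not defaults):

[myindex]
homePath   = /fast_ssd/splunk/myindex/db        # hot and warm buckets on fast storage
coldPath   = /cheap_nas/splunk/myindex/colddb   # cold buckets on slower, cheaper storage
thawedPath = /cheap_nas/splunk/myindex/thaweddb # restored archive buckets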
Thanks, it worked.
Hi Splunk Community, I've generated self-signed SSL certificates and configured them in web.conf, but they don't seem to be taking effect. Additionally, I am receiving the following warning message when starting Splunk:

WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.

Could someone please help me resolve this issue? I want to ensure that Splunk uses the correct SSL certificates and that hostname validation works properly.
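For reference, a minimal sketch of the settings involved (certificate paths are illustrative); note that the cliVerifyServerName setting from the warning lives in server.conf, not web.conf:

# web.conf
[settings]
enableSplunkWebSSL = true
serverCert = $SPLUNK_HOME/etc/auth/mycerts/mySplunkWebCert.pem
privKeyPath = $SPLUNK_HOME/etc/auth/mycerts/mySplunkWebPrivateKey.key

# server.conf
[sslConfig]
cliVerifyServerName = true

A restart of Splunk is needed for changes to either file to take effect.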
How can we see the preview of the Splunk AI APP? We already accepted the terms and conditions, but we still haven't received any mail notification from Splunk Support to install the Splunk AI APP.
I don't have any field extraction called delta_time; it was created with the eval command. I tried searching all configurations, and all permissions seem to be set correctly.
You mean something like this?

| eval date = strftime(_time, "%F")
| stats min(_time) as start max(_time) as end by date
| eval duration = round(end - start)
| fields - start end

date        duration
2024-10-04  61267
2024-10-05  8

Here is the emulation:

| makeresults format=csv data="jobId, date, skip1, skip2, time
Job1, 10/4/2024, 20241004, 10/4/2024, 0:38:27
Job1, 10/4/2024, 20241004, 10/4/2024, 0:38:41
Job 2, 10/4/2024, 20241004, 10/4/2024, 17:39:12
Job 2, 10/4/2024, 20241004, 10/4/2024, 17:39:24
Job 2, 10/4/2024, 20241004, 10/4/2024, 17:39:34
Job1, 10/5/2024, 20241004, 10/4/2024, 0:38:27
Job1, 10/5/2024, 20241004, 10/4/2024, 0:38:35"
| eval _time = strptime(date . " " . time, "%m/%d/%Y %H:%M:%S")
``` data emulation above ```
Splunk Enterprise Version: 9.2.0.1
OpenShift Version: 4.14.30

We used to have OpenShift event logs coming in under sourcetype openshift_events in index=openshift_generic_logs. However, starting Sept 29, we suddenly stopped receiving any logs for that index and sourcetype. The Splunk forwarders are still running and we did not make any changes to the configuration. Here is the addon.conf that we have:

004-addon.conf
[general]
# addons can be run in parallel with agents
addon = true

[input.kubernetes_events]
# disable collecting kubernetes events
disabled = false
# override type
type = openshift_events
# specify Splunk index
index =
# (obsolete, depends on kubernetes timeout)
# Set the timeout for how long request to watch events going to hang reading.
# eventsWatchTimeout = 30m
# (obsolete, depends on kubernetes timeout)
# Ignore events last seen later that this duration.
# eventsTTL = 12h
# set output (splunk or devnull, default is [general]defaultOutput)
output =
# exclude managed fields from the metadata
excludeManagedFields = true

[input.kubernetes_watch::pods]
# disable events
disabled = false
# Set the timeout for how often watch request should refresh the whole list
refresh = 10m
apiVersion = v1
kind = pod
namespace =
# override type
type = openshift_objects
# specify Splunk index
index =
# set output (splunk or devnull, default is [general]defaultOutput)
output =
# exclude managed fields from the metadata
excludeManagedFields = true

Apologies if I'm missing something obvious here. Thank you!
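One way to narrow this down is to check the forwarders' internal logs for errors around Sept 29 (index=_internal and sourcetype=splunkd are the standard Splunk internals; the host filter is a placeholder for your forwarder names):

index=_internal sourcetype=splunkd log_level=ERROR
| stats count by host component

If the forwarders stopped phoning home entirely, they will also disappear from index=_internal, which is itself a useful signal.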
Hello @abow Can you check this article: https://splunk.my.site.com/customer/s/article/How-to-make-Splunk-Add-on-for-AWS-to-fetch-data-via-cross-account-configuration ? Hopefully it will resolve your queries.
Hi @Tiong.Koh, I apologize for the delay in my response. I reviewed this limitation and found that an idea request was previously submitted regarding it, but it was rejected due to concerns about system performance. Additionally, the 32,767-character limit was deemed sufficient for SQL analysis at the time. Regarding the character limit, the current maximum is 32,767 characters, including white space. If this limitation is critical to your business processes, I recommend reaching out to your account manager or sales rep with a business justification so that they can discuss it with our product manager. Regards, Martina
Hi @ilhwan > but I don't see a magnifying glass on any of the panels Please mouse over the lower-right corner of the panel; the magnifying glass will then appear. For example, on the DMC, Indexing ---> Indexes and Volumes ---> Indexes and Volumes: Instance has this panel. Only when I mouse over it does the magnifying glass appear.
Maybe I should rephrase my question: why can't a hot bucket roll straight to a cold bucket? I get that a hot bucket is actively being written to, which is why I said in my post that I think that is the reason warm buckets have to exist in the first place. But all I've been told so far is that a hot bucket is actively being updated and a warm bucket is not, which, I'm afraid, doesn't exactly answer the question above.
Hi guys, does anyone know whether the Trial version of Splunk Observability Cloud still accepts logs sent to it directly by the Splunk OTel Collector? According to this page, https://docs.splunk.com/observability/en/gdi/opentelemetry/components/splunk-hec-exporter.html , it says: "Caution - Splunk Log Observer is no longer available for new users. You can continue to use Log Observer if you already have an entitlement." As I'm using the Trial version, I'm just curious to see how Observability Cloud processes logs via fluentd, rather than using Log Observer Connect, which uses the Universal Forwarder to send logs to Splunk Cloud/Enterprise first, after which Observability Cloud just views log events via the integration. It seems Observability Cloud is not showing the ordinary syslog or Windows events that the Splunk OTel Collector sends to it automatically out of the box. I tried setting up my own log file, but nothing shows up in O11y either.
OK, this should work, but there is one wrinkle: I want to do this on two fields. These are my records:

Job1   10/4/2024  20241004  10/4/2024  0:38:27
Job1   10/4/2024  20241004  10/4/2024  0:38:41
Job 2  10/4/2024  20241004  10/4/2024  17:39:12
Job 2  10/4/2024  20241004  10/4/2024  17:39:24
Job 2  10/4/2024  20241004  10/4/2024  17:39:34
Job1   10/5/2024  20241004  10/4/2024  0:38:27
Job1   10/5/2024  20241004  10/4/2024  0:38:35

From this I want to be able to say: Job1 took 14 seconds on 10/4/2024 and Job 2 took 22 seconds on 10/4; Job1 took 8 seconds on 10/5.
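Building on the emulation above, a sketch that groups by both fields (jobId and date are taken from the sample records; only the stats/eval lines differ from the earlier answer):

| stats min(_time) as start max(_time) as end by jobId date
| eval duration = round(end - start)
| fields jobId date duration

Against the sample data this should yield Job1 = 14s and Job 2 = 22s on 10/4/2024, and Job1 = 8s on 10/5/2024.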
Agreed. I will mark this as closed and raise a new question for what I am trying to do. Thanks for your help.
I believe the issue might be related to field extractions. There's likely a field called delta_time or delete/create in the Search app that isn't set to global for all apps. To troubleshoot:
1. Inspect the search.log file.
2. Look for entries containing "lispy".
3. Examine the search TERMS in these entries.
4. See if you can find anything related to the fields mentioned above.
This approach might help you identify why the search isn't working as expected for users without direct index access. If you find that certain fields aren't available globally, you may need to adjust their extraction settings.
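If you want to pull those lines quickly from the command line, a hedged one-liner (the <sid> placeholder is the job's search ID; the dispatch path is the standard location, but verify it on your version):

grep -i lispy $SPLUNK_HOME/var/run/splunk/dispatch/<sid>/search.log

The same log is also viewable from the Job Inspector in Splunk Web.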
<row>
  <panel>
    <title>VIEW BY ENTITY</title>
    <input type="dropdown" token="tokEnvironment" searchWhenChanged="true">
      <label>Domain</label>
      <choice value="Costing">Costing</choice>
      <change>
        <set token="inputToken">""</set>
        <set token="outputToken">""</set>
        <set token="inputToken2">""</set>
        <set token="outputToken2">""</set>
        <unset token="tokSystem"></unset>
        <unset token="form.tokSystem"></unset>
      </change>
      <default>Cost</default>
      <initialValue>Cost</initialValue>
    </input>
    <input type="dropdown" token="tokSystem" searchWhenChanged="false">
      <label>Data Entity</label>
      <fieldForLabel>$tokEnvironment$</fieldForLabel>
      <fieldForValue>$tokEnvironment$</fieldForValue>
      <search>
        <!--<progress>-->
        <!-- match attribute for condition uses eval-like expression (see Splunk search language 'eval' command) -->
        <!-- logic: if resultCount is 0, then show a static html element, and hide the chart element -->
        <!--<condition match="'job.resultCount' == 0">-->
        <!--  <set token="show_html">true</set>-->
        <!--</condition>-->
        <!--<condition>-->
        <!--  <unset token="show_html"/>-->
        <!--</condition>-->
        <!--</progress>-->
        <query>| makeresults
| fields - _time
| eval Costing="GetQuoteByCBD,bolHeader,bolLineItems,laborProcess,costSheetCalc,FOB"
| fields $tokEnvironment$
| makemv $tokEnvironment$ delim=","
| mvexpand $tokEnvironment$</query>
      </search>
      <change>
        <condition match="$label$==&quot;get&quot;">
          <set token="inputToken">get</set>
          <set token="outputToken">get</set>
          <set token="inputToken2">b</set>
          <set token="outputToken2">b</set>
          <set token="inputToken3">3</set>
          <set token="outputToken3">3</set>
          <set token="inputToken4">d</set>
          <set token="outputToken4">d</set>
          <set token="inputToken5">e</set>
          <set token="outputToken5">e</set>
          <set token="apiToken">d</set>
          <set token="entityToken">get</set>
        </condition>
        <condition match="$label$==&quot;batch&quot;">
          <set token="inputToken">batch</set>
          <set token="outputToken">batch</set>
          <set token="inputToken2">c</set>
          <set token="outputToken2">c</set>
          <set token="inputToken3">d</set>
          <set token="outputToken3">d</set>
          <set token="inputToken4">b</set>
          <set token="outputToken4">b</set>
          <set token="inputToken5">f</set>
          <set token="outputToken5">f</set>
          <set token="apiToken">b</set>
          <set token="entityToken">batch</set>
        </condition>
        <condition match="$label$==&quot;Calc&quot;">
          <set token="inputToken">Calc</set>
          <set token="outputToken">Calc</set>
          <set token="inputToken2">init</set>
          <set token="outputToken2">init</set>
          <set token="inputToken3">d</set>
          <set token="outputToken3">d</set>
          <set token="inputToken4">Calc</set>
          <set token="outputToken4">Calc</set>
          <set token="apiToken">Calc</set>
          <set token="entityToken">Calc</set>
        </condition>
      </change>
      <default>get</default>
    </input>
    <input type="time" token="time_picker" searchWhenChanged="true">
      <label>Time</label>
      <default>
        <earliest>-15m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <html>
      <ul></ul>
    </html>
  </panel>
</row>
<row>
  <panel>
    <title>Init Lambda</title>
    <table>
      <search>
        <query>index="" source IN ("/aws/lambda/aa-$outputToken$-$stageToken$-$outputToken2$")
| spath msg
| search msg="gemini:streaming:info:*"
| stats count by msg</query>
        <earliest>$time_picker.earliest$</earliest>
        <latest>$time_picker.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="dataOverlayMode">heatmap</option>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
  <panel>
    <title>Init Lambda - Duplicate</title>
    <table>
      <search>
        <query>index="" source IN ("/aws/lambda/aa-$outputToken$-$stageToken$-$outputToken2$")
| spath msg
| search msg="gemini:streaming:warning:*"
| stats count by msg</query>
        <earliest>$time_picker.earliest$</earliest>
        <latest>$time_picker.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="dataOverlayMode">heatmap</option>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
  <panel>
    <title>Init Lambda - Error</title>
    <table>
      <search>
        <query>index="" source IN ("/aws/lambda/aa-$outputToken$-$stageToken$-$outputToken2$")
| spath msg
| search msg="gemini:streaming:error:*"
| stats count by msg</query>
        <earliest>$time_picker.earliest$</earliest>
        <latest>$time_picker.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="dataOverlayMode">heatmap</option>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
</row>