All Posts


I have ingested data from InfluxDB into Splunk Enterprise using the InfluxDB add-on for Splunk DB Connect. When performing an InfluxQL search in the SQL Explorer of the InfluxDB connection I created, I am getting empty values for the value column. Query:

from(bucket: "buckerName")
|> range(start: -6h)
|> filter(fn: (r) => r._measurement == "NameOfMeasurement")
|> filter(fn: (r) => r._field == "value")
|> yield(name: "count")

Splunk DBX Add-on for InfluxDB JDBC
Apologies for the basic question. As a PoC, our company has been provided a Splunk Enterprise Trial License. We would first like to ingest palo alto logs and fire alerts (by email, etc.), but we don't know how to go about it. (We were able to ingest past logs manually, but alerts can't be triggered on historical logs, can they? Would they fire if we set the dates to the present? We also don't yet know how to set alerts up.) As for our environment, we have one virtual server on FJCloud running Splunk; we have not installed Forwarders on any other servers. If anyone knows, we would greatly appreciate your guidance. Thank you.
Hi @Poojitha, you can add multiple tokens on the same configuration page! Please refer to the image I am attaching. Is this what you are looking for?
Hello @somesoni2, what a great idea to use the same name and rely on upper/lower case to distinguish between eventtype and EventType...
Hi @gcusello,

Thanks for the feedback. I wanted to understand: if we change the OS on the first physical node after restoring the backup, that node will be running Red Hat while the other three are still running CentOS. Will this node still be part of the cluster? Can servers with different operating systems be part of the same cluster? Thanks
Try something like this:

| streamstats count by ReasonCode EquipmentName reset_on_change=t global=f
| where count=1
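To illustrate what that streamstats answer does: with reset_on_change=t, the counter restarts every time the (ReasonCode, EquipmentName) pair changes, so `where count=1` keeps only the first event of each consecutive run. A minimal Python sketch of the same logic (field names taken from the SPL above, sample rows are made up):

```python
def first_of_each_run(rows):
    """Keep only the first row of every consecutive run of
    (ReasonCode, EquipmentName) values, roughly what
    `streamstats count by ... reset_on_change=t | where count=1` keeps."""
    kept, prev = [], None
    for row in rows:
        key = (row["ReasonCode"], row["EquipmentName"])
        if key != prev:  # the counter resets when the pair changes
            kept.append(row)
        prev = key
    return kept

rows = [
    {"ReasonCode": "A", "EquipmentName": "E1"},
    {"ReasonCode": "A", "EquipmentName": "E1"},  # same run, dropped
    {"ReasonCode": "B", "EquipmentName": "E1"},  # pair changed, kept
    {"ReasonCode": "A", "EquipmentName": "E1"},  # changed again, kept
]
```

Note that a repeated pair later in the stream starts a new run and is kept again; that is the difference from a plain dedup.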
I migrated to v9.1.5 and have the TA-XLS app, installed and working since v7.3.6. Running 'outputxls' now generates a 'cannot concat str to bytes' error on the following line of the app's outputxls.py file:

try: csv_to_xls(os.environ['SPLUNK_HOME'] + "/etc/apps/app_name/appserver/static/fileXLS/" + output)

Things I tried:
- Encoding by appending .encode('utf-8') to the string: not working.
- Importing the SIX and FUTURIZE/MODERNIZE libraries and running them to "upgrade" the script: they just added `from __future__ import absolute_import` and changed a line: not working.
- Defining each variable separately, and some other variations: not working.

splunk_home = os.environ['SPLUNK_HOME']
static_path = '/etc/apps/app_name/appserver/static/fileXLS/'
output_bytes = output
csv_to_xls((splunk_home + static_path.encode(encoding='utf-8') + output))

I sort of rely on this app to work; any kind of help is appreciated. Thanks!
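A possible direction, not a confirmed fix for TA-XLS itself: on Python 3 (Splunk 9.x), os.environ values are str, so the "cannot concat str to bytes" error suggests `output` is arriving as bytes, and the fix is to decode `output` to str rather than encode the path to bytes. A minimal sketch, assuming the app name and static path from the snippet above (build_output_path is a hypothetical helper, not part of the app):

```python
import os

def build_output_path(output):
    """Join SPLUNK_HOME with the TA-XLS static path, coercing bytes to str.

    In Python 3, os.environ values are str; if `output` is bytes
    (common after a Python 2 -> 3 migration), plain concatenation
    raises "can only concatenate str (not bytes) to str".
    """
    if isinstance(output, bytes):
        # decode the bytes side instead of encoding the str side
        output = output.decode("utf-8")
    return os.path.join(os.environ["SPLUNK_HOME"],
                        "etc", "apps", "app_name", "appserver",
                        "static", "fileXLS", output)
```

The third attempt in the post fails for the same underlying reason: `static_path.encode(...)` turns the str side into bytes, so the concatenation is again mixing str and bytes, just in the opposite direction.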
@mrilvan there is only a Splunk app for that at the moment and nothing on the SOAR side. However, if the API is available there is nothing stopping you from building a custom app in the platform, as I am sure XSOAR is just another REST API.
You appear to be missing part of the answer - hot and warm buckets are normally stored on expensive fast storage, whereas (in order to reduce costs) cold buckets are stored on cheaper slower storage. Using these distinctions, Splunk gives organisations the flexibility to manage the cost of their storage infrastructure.
Thanks, it worked.
Hi Splunk Community,

I've generated self-signed SSL certificates and configured them in web.conf, but they don't seem to be taking effect. Additionally, I am receiving the following warning message when starting Splunk:

WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.

Could someone please help me resolve this issue? I want to ensure that Splunk uses the correct SSL certificates and that hostname validation works properly.
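For comparison, a minimal configuration sketch for Splunk Web SSL; the certificate and key filenames and the mycerts directory are placeholders, so substitute your own paths:

```ini
# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
enableSplunkWebSSL = true
# paths may be absolute or relative to $SPLUNK_HOME
serverCert = etc/auth/mycerts/mySplunkWebCert.pem
privKeyPath = etc/auth/mycerts/mySplunkWebPrivateKey.key

# $SPLUNK_HOME/etc/system/local/server.conf
[sslConfig]
# the startup warning refers to this setting; enabling it makes the CLI
# verify that the server certificate's name matches the server it connects to,
# which requires a certificate issued for the correct hostname
cliVerifyServerName = true
```

A restart of Splunk is needed for these settings to take effect, and self-signed certificates will still produce browser trust warnings even once they are being served correctly.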
How can we see the preview of the Splunk AI App? We already accepted the terms and conditions, but we still haven't received any mail notification from Splunk Support to install the Splunk AI App.
I don't have any field extraction called delta_time; it was created with the eval command. I tried searching all configurations, and all permissions seem to be set correctly.
You mean something like this?

| eval date = strftime(_time, "%F")
| stats min(_time) as start max(_time) as end by date
| eval duration = round(end - start)
| fields - start end

date        duration
2024-10-04  61267
2024-10-05   8

Here is the emulation:

| makeresults format=csv data="jobId, date, skip1, skip2, time
Job1, 10/4/2024, 20241004, 10/4/2024, 0:38:27
Job1, 10/4/2024, 20241004, 10/4/2024, 0:38:41
Job 2, 10/4/2024, 20241004, 10/4/2024, 17:39:12
Job 2, 10/4/2024, 20241004, 10/4/2024, 17:39:24
Job 2, 10/4/2024, 20241004, 10/4/2024, 17:39:34
Job1, 10/5/2024, 20241004, 10/4/2024, 0:38:27
Job1, 10/5/2024, 20241004, 10/4/2024, 0:38:35"
| eval _time = strptime(date . " " . time, "%m/%d/%Y %H:%M:%S")
``` data emulation above ```
Splunk Enterprise Version: 9.2.0.1
OpenShift Version: 4.14.30

We used to have OpenShift event logs coming in under sourcetype openshift_events in index=openshift_generic_logs. However, starting Sept 29, we suddenly stopped receiving any logs for that index and sourcetype. The Splunk forwarders are still running and we did not make any changes to the configuration. Here is the addon.conf that we have:

004-addon.conf
[general]
# addons can be run in parallel with agents
addon = true

[input.kubernetes_events]
# disable collecting kubernetes events
disabled = false
# override type
type = openshift_events
# specify Splunk index
index =
# (obsolete, depends on kubernetes timeout)
# Set the timeout for how long request to watch events going to hang reading.
# eventsWatchTimeout = 30m
# (obsolete, depends on kubernetes timeout)
# Ignore events last seen later that this duration.
# eventsTTL = 12h
# set output (splunk or devnull, default is [general]defaultOutput)
output =
# exclude managed fields from the metadata
excludeManagedFields = true

[input.kubernetes_watch::pods]
# disable events
disabled = false
# Set the timeout for how often watch request should refresh the whole list
refresh = 10m
apiVersion = v1
kind = pod
namespace =
# override type
type = openshift_objects
# specify Splunk index
index =
# set output (splunk or devnull, default is [general]defaultOutput)
output =
# exclude managed fields from the metadata
excludeManagedFields = true

Apologies if I'm missing something obvious here.

Thank you!
Hello @abow, can you check this article: https://splunk.my.site.com/customer/s/article/How-to-make-Splunk-Add-on-for-AWS-to-fetch-data-via-cross-account-configuration ? Hopefully it will resolve your queries.
Hi @Tiong.Koh,

I apologize for the delay in my response. I reviewed this limitation and found that an idea request was previously submitted regarding it, but it was rejected due to concerns about system performance. Additionally, the 32,767-character limit was deemed sufficient for SQL analysis at the time.

Regarding the character limit, the current maximum is 32,767 characters, including white space. If this limitation is critical to your business processes, I recommend reaching out to your account manager or sales rep with a business justification so that they can discuss it with our product manager.

Regards,
Martina
Hi @ilhwan

> but I don't see a magnifying glass on any of the panels

Please mouse over the lower-right corner of the panel; then you can see the magnifying glass.

For example, on the DMC, Indexing ---> Indexes and Volumes ---> Indexes and Volumes: Instance has this panel. Only when I mouse over does the magnifying glass appear.
Maybe I should rephrase my question to this: why can't a hot bucket roll straight to a cold bucket?

I understand that the hot bucket is actively being written to, which, as I said in my post, is what I suspect is the reason warm buckets have to exist in the first place. But all I've been told so far is that the hot bucket is actively being updated and the warm bucket is not, which, I'm afraid, doesn't exactly answer the above question.