All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, there is a constant time difference (_indextime - _time) from a few Windows servers, as shown below. I'm not sure what is causing this or how to fix it.
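A minimal sketch for quantifying the lag per host before chasing the cause (the index and sourcetype here are assumptions; point them at wherever the Windows data lands):

index=wineventlog sourcetype=WinEventLog* ```index and sourcetype are placeholders; adjust to your environment```
| eval lag_seconds = _indextime - _time
| stats avg(lag_seconds) AS avg_lag max(lag_seconds) AS max_lag BY host
| sort - avg_lag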
Dear professionals, I want to get the log size of each service in an index. This is my search string: index="hcg_oapi_prod" | eval size = len(_raw) | stats sum(size) as rawSize by sourcetype | eval GB = round(rawSize / 1024 / 1024 / 1024, 2). But this query cannot complete and is auto-canceled. Please help me.
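Summing len(_raw) forces Splunk to read every raw event, which is usually why such a search hits a limit and gets auto-canceled over a large index or time range. One common workaround (a sketch, assuming you are allowed to read the _internal index) is to use the license usage log, which already records indexed bytes per sourcetype:

index=_internal source=*license_usage.log* type=Usage idx="hcg_oapi_prod" ```license usage data lives on the license manager```
| stats sum(b) AS bytes BY st
| eval GB = round(bytes / 1024 / 1024 / 1024, 2)
| rename st AS sourcetype
| sort - GB

Note these figures reflect licensed (ingested) bytes rather than exactly len(_raw), but they are usually close enough for a per-sourcetype size breakdown.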
Hi all, we have a dashboard with a Radial Gauge that refreshes automatically every 2 minutes. When the dashboard refreshes automatically, the Radial Gauge dial and number are in the correct, centered position. However, the needle itself is shown in its starting position (upper-left corner), off to the side of the Radial Gauge. In this image, in the inspector on the right-hand side, you can see that in "circle" the "cx" and "cy" coordinates are in the starting position. If we refresh the page manually or resize the window in any way, the needle moves to the correct position. Here we can see the correct position of the needle, which is also reflected in the "circle" coordinates "cx" and "cy". Has anyone encountered this issue before, and how can it be fixed?
Hello, I have signed up at my.phantom.us in order to get the OVA and start testing. Unfortunately my account hasn't been approved yet and it seems to be taking some time. Can someone from support look into it, please? Thank you.
Team, index sourcetype=app_* some_search | rex "\[(?<transactionid>[A-Za-z0-9]+)\]" | rename transactionid as q | table q | format returns ( ( q="100223608103" ) OR ( q="D202204021000676" ) ). How do I get the following instead? ( ( "100223608103" ) OR ( "D202204021000676" ) ) Thank you.
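A sketch of one commonly used trick, worth verifying on your Splunk version: rename the field to search before format, since a field literally named search (or query) is emitted as a bare quoted value without the field name.

index=your_index sourcetype=app_* some_search ```index and search terms carried over from the question; adjust as needed```
| rex "\[(?<transactionid>[A-Za-z0-9]+)\]"
| rename transactionid AS search
| format

With the sample values above this should come back as ( ( "100223608103" ) OR ( "D202204021000676" ) ).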
Hi Team, please help me out in this case. I am searching for port-scanning attack attempts with the following query: index="firewall" | stats dc(destination_port) as pcount by source_ip | where pcount > 500. It shows me results only in the form source_ip 145.132.11.11 and pcount 777, but I want the results in the form source_ip, source_port, destination_ip, destination_port, pcount. What would the query be for this? Waiting for your kind reply.
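A sketch, assuming source_port and destination_ip are already extracted fields in the firewall data: carry them along as multivalue lists next to the distinct count (for a real scanner the destination_port list can get very long).

index="firewall"
| stats dc(destination_port) AS pcount values(source_port) AS source_port values(destination_ip) AS destination_ip values(destination_port) AS destination_port BY source_ip
| where pcount > 500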
I have a fresh Splunk Cloud instance with the Splunk Add-on for AWS and the AWS app installed. When I try to load the Analytics view I get a "You do not have permissions to access objects of user=sc_admin" error. I've given sc_admin all the privileges.
hello, I transpose events like this:

| eval time=strftime(_time,"%H:%M")
| sort time
| fields - _time _span _origtime _events
| transpose 0 header_field=time column_name=KPI include_empty=true
| rename "row1" as "7:00"
| sort KPI

But I have a problem with my header_field. Sometimes it works well and the time field is displayed correctly: 7:00, 8:00, 9:00, and so on. But sometimes (between 7:00 and 9:00 most of the time, and I don't know why, because after that it works well), instead of the time values I get row1, row2, row3, and so on. Does anybody have an idea about this issue? I tried a workaround by renaming row1, row2, and so on, but the rename doesn't work. Could you help, please?
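One thing worth ruling out (a sketch; the assumption is that the row1/row2 fallback appears when some results reach transpose with an empty time value, in which case transpose falls back to its default column names):

| eval time=strftime(_time,"%H:%M")
| eval time=if(isnull(time) OR time="", "unknown", time) ```guard the header field before transposing```
| sort time
| fields - _time _span _origtime _events
| transpose 0 header_field=time column_name=KPI include_empty=true
| sort KPI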
Hello Team, I wanted to understand whether Splunk captures the data/logs generated by Glue jobs. If yes, can you share which metrics are captured and how they get captured? Also, if the metrics captured for Athena and Aurora could be explained as well, that would be helpful. Thanks!
Hi Helpers - below is the use case where I am stuck with my ES upgrade. My Splunk version was recently upgraded from 7.2.7 to 8.1.3. After the Splunk upgrade, Splunk ES views were throwing pop-up messages "Timelines could not be loaded". Splunk ES was on 4.5.2, which worked fine on Splunk 7.2.7. Since it looked incompatible, we planned to upgrade it to 6.2.0. Below is the process followed. It's an SHC environment with 3 search heads.

1. On the ES deployer, take backups of the etc/shcluster/apps and etc/apps folders.
2. On the ES deployer, copy the apps (SA-*, DA-*, SplunkEnterpriseSecuritySuite) from etc/shcluster/apps to the etc/apps folder.
3. Run the upgrade command: /opt/splunk/bin/splunk install app ./splunk-enterprise-security_620.spl -update 1
4. Run the essinstall command as per the install documentation: /opt/splunk/bin/splunk search '| essinstall --deployment_type shc_deployer' -auth admin:TelstraDR01 action=upgrade (output attached)
5. /opt/splunk/bin/splunk restart (multiple invalid stanzas; output attached)
6. Planning to replace all conf files from the backup app directories into the upgraded app directories, as we have noticed the conf files changed. Not sure which ones to replace or what the consequences would be - PENDING

I'm a bit confused by the documentation. The upgrade documentation didn't have the essinstall action=upgrade part, but I read about it in a blog. Am I supposed to run it or not? When I followed the upgrade documentation, only the SplunkEnterpriseSecuritySuite app folder changed and the remaining SA-* and DA-* apps were unchanged, but SA-* and DA-* did change when I ran the essinstall command followed by splunk restart. All of this is only on the deployer; I haven't pushed any changes to the search heads. Has anyone recently done an ES upgrade who can share clear steps to follow? I raised a Splunk support case and they are advising me to just follow the upgrade documentation, which is not fully clear. Thanks & Regards, Naresh
How do I get details of Windows servers that are not activated or failed to activate Windows via the KMS server? I would like to prepare a dashboard that shows the servers that failed Windows activation.
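A sketch of a starting point, assuming the Windows Application event log is already being collected and that the Security-SPP activation-failure event (EventCode 8198 in many environments) is what you want to count; the index, sourcetype, and event code are assumptions to validate against your own data:

index=wineventlog sourcetype=WinEventLog:Application SourceName="Microsoft-Windows-Security-SPP" EventCode=8198 ```all of these values are assumptions; confirm them in your environment```
| stats count AS activation_failures latest(_time) AS last_failure BY host
| convert ctime(last_failure)
| sort - activation_failures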
Hi everyone, I am new to Splunk and I am trying to find the distinct IDs whose PRODUCT column does not include a certain value. For example, assume I have the following table called TABLE1:

ID  PRODUCT  PHONE
1   A        999999
2   A        888888
2   B        888888
1   C        999999
3   D        777777
3   C        777777
3   B        777777
4   B        666666
4   D        666666
5   A        555555
5   B        555555
5   D        555555
... ...      ...

What I want is the following output when I look for the IDs whose PRODUCT values never equal C:

ID  PHONE
2   888888
4   666666
5   555555
... ...

How do I write this search query in Splunk? Please help.
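A sketch, assuming ID, PRODUCT, and PHONE are already extracted fields in the events (the index name is a placeholder): group by ID, then keep only the IDs whose product list never contains C.

index=your_index ```placeholder for wherever TABLE1 lives```
| stats values(PRODUCT) AS products values(PHONE) AS PHONE BY ID
| where isnull(mvfind(products, "^C$"))
| table ID PHONE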
I am stuck. I have tried all of the options I have found; most come close, but I cannot make it work. I collect data from a CMDB that has a field with a date I need to filter on, created_date. What I am trying to accomplish: generate a query over all events of the past 3 weeks where, for CMDB events whose "created_date" field spans multiple months, I only want the events whose created_date falls within that 3-week period.

If I use the following query, it returns, as expected, all events within the three-week period. What I want are all events based on created_date, not based on _time. By the way, created_date has a standard time format: "%Y-%m-%d %H:%M:%S".

index=cmdb dv_number=* dv_assigned_to=* dv_state=* created_date earliest=3w@w latest=@w | search [| inputlookup cmdb_users.csv | table dv_assigned_to ] | timechart span=1w count(dv_number)

What I also tried was converting the field created_date to _time using the following, which turned created_date into epoch time and produced the correct _time output, but I cannot use earliest/latest on it, since my understanding is that earliest/latest only work on the initial search.

index=cmdb dv_number=* dv_assigned_to=* dv_state=* created_date earliest=3w@w latest=@w | search [| inputlookup cmdb_users.csv | table dv_assigned_to ] | eval created_date=strptime(created_date,"%Y-%m-%d %H:%M:%S") | eval _time=created_date ........ ..... ..

I also tried using a where statement, which partially worked, but it only covered the outer boundary (3 weeks), not the inner boundary at the end of the last week.

| where created_date <= relative_time(now(), "-3w@w") AND created_date >= relative_time(now(), "@w")
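A sketch combining the pieces above: run the search over a time range wide enough to contain every event whose created_date could fall in the window, convert created_date once, then apply both boundaries (at or after the start of the week three weeks ago, and before the start of the current week). Field and lookup names are taken from the question.

index=cmdb dv_number=* dv_assigned_to=* dv_state=* created_date=*
| search [| inputlookup cmdb_users.csv | table dv_assigned_to ]
| eval created_epoch = strptime(created_date, "%Y-%m-%d %H:%M:%S")
| where created_epoch >= relative_time(now(), "-3w@w") AND created_epoch < relative_time(now(), "@w")
| eval _time = created_epoch
| timechart span=1w count(dv_number)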
Hi Team, is it possible to onboard Salesforce data using the HEC (HTTP Event Collector) methodology? Thanks, Dibeena
Hi, I'm using the Event Timeline viz to create a timeline. The visualisation works when it's a single panel in a dashboard. However, I need this timeline visualisation to work in a dashboard with a drilldown from other panels. The functionality of the timeline works as expected on the drilldown dashboard, EXCEPT that the time axis is not labelled. Same query and options as the standalone dashboard, same data, same panel settings, but the time axis is labelled in the standalone dashboard and not in the dashboard I actually need it to work in, the drilldown dashboard. What could be the reason for this? Do the bins need to be in a certain form for this viz, perhaps? Is there any way to force this viz to show the time axis? Thanks, Patrick
In a log, if there are two similar words with different values, how do I retrieve the value of the second word using regex? Example: "Display details of value =abc and value=def for id=1". How do I display the value "def"?

index=* "Letters" | rex field=_raw max_match=0 "value=?(?<value2>[^\n]*)" | stats values(value2) as letter by id

The above query returns one value: "abc and value=def".
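A sketch of one way to do it: capture each value as a separate match by stopping at whitespace instead of end of line, then pick the second match with mvindex. The index and the "Letters" search term are carried over from the question.

index=* "Letters"
| rex field=_raw max_match=0 "value\s*=\s*(?<value2>\S+)"
| eval second_value = mvindex(value2, 1) ```mvindex is zero-based, so 1 is the second match```
| stats values(second_value) AS letter BY id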
I get data from a Universal Forwarder, but 100 MB of data takes an hour to arrive. Are there any settings to speed this up?
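One knob that often matters here (a sketch; this assumes the forwarder's default throughput cap is the bottleneck, which is worth confirming before changing anything): the Universal Forwarder limits its output rate via maxKBps in limits.conf.

# limits.conf on the Universal Forwarder, e.g. $SPLUNK_HOME/etc/system/local/limits.conf
[thruput]
# the default cap is a few hundred KBps; 0 removes the cap entirely (raise with care)
maxKBps = 0

Restart the forwarder after the change. If throughput is still low, network bandwidth, the indexer's parsing queues, or very broad monitor stanzas can also be the limiting factor.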
Cannot find the main Search app
Can you please point me to the start-up screen where I can start a new search?
I've seen this on some older posts, but I am currently battling this issue. For some hosts, restarting it makes the logs start flowing again without the above error message (suggesting a delayed start is the answer). But on some of them a restart does nothing, and there are real Security logs that Splunk is merely reporting the above error message for.