All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello team, have there been any announcements about support for the latest PHP version by AppDynamics? Currently this version does not pass the requirements check. Thank you in advance.
I know this is not what Splunk is for, but since we have so much of our current monitoring built into it, I wanted to see if I can just add this as well. I am looking to create a dashboard that a user pulls up, where the user provides values for certain inputs; the dashboard takes those values and produces a result based on a pre-defined algorithm. For an (extremely simple) example, the user has to enter: Number of Cars, Number of Packages, Number of People. Then there is a formula we have stored somewhere that takes the inputs, weights them, and produces a result (e.g. "take the blue ferry" vs. "take the white ferry"). It would be nice if we could do this, even though it's very simple, since we're asking our users to spend more time in the tool. TIA!
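A minimal sketch of the computation side, assuming dashboard text inputs that set hypothetical tokens $cars$, $packages$, and $people$, and with made-up weights and threshold:

```spl
| makeresults
| eval cars = $cars$, packages = $packages$, people = $people$
| eval score = cars * 0.5 + packages * 0.3 + people * 0.2
| eval recommendation = if(score > 50, "take the blue ferry", "take the white ferry")
| table recommendation
```

The formula itself could also live in a lookup so it can be changed without editing the dashboard.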
I am facing this issue for the second time, and I have tried almost every possible way out in the last 2 months. Here is the situation: we have a CSV file which gets refreshed every 1 hour (it may or may not contain new events). We observed that after a few hours the file stops getting into Splunk, and after a Splunk restart it starts ingesting data again. In the splunkd logs it says it is ignoring the path. I have tried crcSalt and initCrcLength, but neither worked in my case. All I want is for Splunk to always read the new file, whether or not there are new events, and just stay up to date with the file (I cannot add a counter in the file).
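One commonly suggested approach for files that are rewritten in place is to make Splunk checksum by modification time instead of content, so the file is re-read whenever it is refreshed; a sketch with a hypothetical path, noting that modtime re-indexes the entire file on every change and can therefore duplicate events:

```ini
# inputs.conf on the forwarder (hypothetical path)
[monitor:///data/report.csv]
sourcetype = csv

# props.conf on the same forwarder
[source::/data/report.csv]
CHECK_METHOD = modtime
```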
Greetings fellow Splunkers, I was wondering if anyone has figured out the most accurate metric for tracking when a user logs into Windows: not the boot-up/startup time, but the time between when a user puts in their password and when they are able to interact with the desktop. I am not able to find a particular event for this. Waiting for GPO to complete is not viable since we stream them in the background. Comparing events between local and AD events might prove useful, but we have a significant number of users who are WFH, and they use cached creds until they get on the VPN. Comparing against the login when they get on the VPN would be simpler, but if they do anything else before they log into the VPN, that will throw it off as well. I would appreciate any thoughts or ideas you fine folks might have. Thank you!
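If Sysmon or similar process telemetry is available, one rough proxy is the gap between the interactive logon event and the first start of explorer.exe; a sketch, where the index names, the Sysmon data, and the assumption that explorer.exe start approximates "desktop is interactive" are all mine, not established practice:

```spl
(index=wineventlog EventCode=4624 Logon_Type=2)
OR (index=sysmon EventCode=1 Image="*\\explorer.exe")
| stats min(eval(if(EventCode=4624, _time, null()))) as logon_time
        min(eval(if(EventCode=1, _time, null()))) as desktop_time
  by host
| eval logon_duration_sec = desktop_time - logon_time
```

This ignores multiple users per host and fast user switching, so treat it as a starting point only.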
Hi community, I have been using a very nice 3D scatterplot in a dashboard in Splunk Cloud (8.2). It was working fine. Now the visualization is broken; it seems like it has been removed. Can someone please confirm whether that is the case? Does anyone know an alternative for it, or a workaround to get it working? Thanks in advance for any help.
Hi Splunk community, I am currently having an issue with deploying apps to universal forwarders. On the deployment server side, I have the hosts set up in the whitelist for specific server classes, and on the UF side, I have a deployment client on the hosts, plus they are phoning home to the DS. We are not receiving logs from these UFs because the app that contains the inputs.conf for these servers is not getting pushed to the UFs. Is there a way to force the app to get pushed, or am I missing a configuration that is causing this to happen? This has been a recurring problem because the app sporadically gets removed from these servers. Thanks in advance!
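A couple of standard commands that can help narrow this down; paths assume a default $SPLUNK_HOME:

```
# On the deployment server: reload serverclass.conf so whitelist changes take effect
$SPLUNK_HOME/bin/splunk reload deploy-server

# On a universal forwarder: confirm which deployment server it is polling
$SPLUNK_HOME/bin/splunk show deploy-poll
```

If the app later disappears from the UFs, it is worth checking whether the host has fallen out of every server class, since a deployment client removes apps it is no longer entitled to.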
I have a set of results from the search with id="base_metrics_search", which provides data for 3 panels. The events each contain a bunch of question metrics and have two fields of note, question_id and is_answered, which I'd like to use to provide data for another 2 panels. An example result set would be:

question_id  is_answered
1            1
2            0
3            1
4            0
5            0
6            1
7            1
8            1
9            0
10           0
11           1
12           0
13           0
14           1
15           1

How do I find the IDs of the first 5 answered and the first 5 unanswered questions? The first 5 of each type could be in any order. I am hoping to use two tokens to pass these values to other panels as a multivalue or comma-separated list. So for the above example, I would end up with something like:

answered_ids = 1,3,6,7,8
unanswered_ids = 2,4,5,9,10

I have searched around the docs and I haven't figured out what SPL to use to do this. I am currently using a chained-search approach using "head", but this gives me results in the first panel and none in the second (I'm using Splunk Enterprise 8.2.2):

<panel>
<title>Top Questions</title>
<table>
<title>Answered Questions</title>
<search base="general_metrics_base">
<query>| head limit=5 (is_answered=1) | fields ...</query>
</search>
<option name="drilldown">none</option>
<option name="refresh.display">none</option>
</table>
<table>
<title>Unanswered Questions</title>
<search base="general_metrics_base">
<query>| head limit=5 (is_answered=0) | fields ...</query>
</search>
<option name="drilldown">none</option>
<option name="refresh.display">none</option>
</table>
</panel>

I'm looking into passing just the question_ids on, since I need to do further querying in those next two panels anyway. I assume the answered-questions search removes events from the base_metrics_search results, preventing the unanswered-questions panel search from using them.
Should the second panel (for the unanswered questions) of a pair of panels, both chained off the same base search, receive the same original result set that base_metrics_search returned to the first panel? Thanks in advance for any help you can offer!
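A sketch of one way to build both lists in a single post-process search, assuming the same question_id and is_answered fields and that events arrive in the desired order; streamstats numbers the events within each answer state, which avoids the head problem entirely:

```spl
| streamstats count as rank by is_answered
| where rank <= 5
| stats list(question_id) as ids by is_answered
| eval ids = mvjoin(ids, ",")
```

With the example data this yields one row per is_answered value, whose ids column could then be written into the answered_ids/unanswered_ids tokens from a <done> handler.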
I want to create a new field whose value, for each value of field_1, is the sum of that value and all other values of field_1 that are less than it. In the example below: 23 is greater than all the other values, so its result is the sum of everything, which is 44; 10 is the smallest, so its result is just 10; for 11, there is one smaller value (10), so the result is 10 + 11 = 21.
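If the intended result is, for each value, the sum of that value and all smaller values, then sorting ascending and taking a running sum gives exactly that; a sketch assuming the field is named field_1:

```spl
| sort 0 field_1
| streamstats sum(field_1) as new_field
```

For the example values 10, 11, 23 this produces 10, 21, and 44 respectively. Note that equal values would also be included in each other's sums.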
When I query data, the search result in the field worker_id shows domain\worker_id, for example ABC\123456; it has the domain name in front of the worker ID. I would like to delete only the domain ABC\ from the field so the result shows only the number of the worker ID. Example: ABC\123456 >>> 123456. Please recommend how to do this in a search query. Best Regards, CR
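A sketch using split/mvindex to keep only the part after the backslash (taking the last segment also leaves values without a domain untouched):

```spl
| eval worker_id = mvindex(split(worker_id, "\\"), -1)
```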
Hello all, my problem is that I think Splunk has a maximum number of characters accepted by the stats command. When I perform this search:

index="bnc_6261_pr_log_conf" logStreamName="*b6b3-f8d14815eaf8/i-09bfc06d1ff10cb79/config_Ec2_CECIO_Linux/stdout"

I see 3 events. But if I perform this request:

index="bnc_6261_pr_log_conf" logStreamName="*b6b3-f8d14815eaf8/i-09bfc06d1ff10cb79/config_Ec2_CECIO_Linux/stdout" | eval l = len(message) | stats values(l) as NumberOfCar

I receive only two lengths (one event was lost): 172 and 6277.

If I perform this statistics request:

index="bnc_6261_pr_log_conf" logStreamName="*b6b3-f8d14815eaf8/i-09bfc06d1ff10cb79/config_Ec2_CECIO_Linux/stdout" | eval length=len(_raw) | stats max(length) perc95(length) max(linecount) perc95(linecount)

I receive: max(length): 29886, perc95(length): 275756.

The event I lose effectively has 28973 characters; I think the actual limit is 10,000. I have already changed the TRUNCATE parameter to 80,000, which is why I can load events with more than 10,000 characters.

My question is: can I change the stats limit in Splunk for the maximum characters? With which parameter? Can it be changed from the web page, by a non-admin, and for a specific source?

Thanks for your future help. Hugues
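The setting that matches this behaviour appears to be maxchars in the [stats] stanza of limits.conf, which defaults to 10000 and silently truncates longer field values processed by stats. It is a server-wide file change, so it is not something a non-admin can set from the web UI, and it cannot be scoped to a single source; a sketch:

```ini
# limits.conf (on the search head)
[stats]
# maximum characters per field value processed by stats; default is 10000
maxchars = 80000
```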
I want to move my .NET 6 API to a Graviton instance with arm64. Does the agent support it? I don't see any docs about it. I use init container instrumentation, but I only see an amd64 Docker image for Alpine. Is there any other instrumentation process that supports arm64?
Hello, I'm struggling to convert the response times for two status codes (200 and 400) from ms to seconds and display the values in a line chart. tmdEvntMs is the API response time in ms, and httpStatus holds my status codes. I tried using foreach, but it just leaves the response time in ms:

timechart span=6h avg(tmdEvntMs) AS avg_response by httpStatus | foreach * [eval avg_response=round(avg_response/1000, 2)]

Any suggestions would be greatly appreciated. Thank you
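After timechart ... by httpStatus, the result columns are named after the status codes (200 and 400), not avg_response, so the eval inside foreach never matches a real field. Using the <<FIELD>> token rewrites each listed column in place; a sketch naming the two codes explicitly so _time is left alone:

```spl
| timechart span=6h avg(tmdEvntMs) AS avg_response by httpStatus
| foreach 200 400 [eval <<FIELD>> = round('<<FIELD>>' / 1000, 2)]
```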
Hello fellow Splunkers. I need a little help with an issue I am having with one of my dashboards. I'm sure it's a simple fix, but I am having a tough time figuring out the correct way to do it. A little background: we created a dashboard to check the status of connected forwarders for our auditing purposes. With the new infrastructure we have a VDI setup that spins up a new hostname when a new user logs in. This results in our dashboard showing a bunch of different forwarders as offline. I want to add to the search so that anything that hasn't reported in within the past 5 days is not pulled into the chart. The search string is below; the time I would like to filter on is last_phone_home. Thanks for any help you can provide!

| inputlookup hosts.csv
| table *
| join max=0
    [| rest splunk_server=local /services/deployment/server/clients
    | fields - applications.* serverClasses.* eai* splunk_server author id title
    | collect index=summary addtime=true marker="dataset=deployment_server_clients"
    | eval diff=now()-lastPhoneHomeTime
    | eval status=if(diff>120, "Connection Failed", "Connection Successful")
    | rename hostname as host]
| rename utsname as platform
| eval last_phone_home=strftime(lastPhoneHomeTime, "%F - %T")
| eval hostname=lower(hostname)
| eval last_hourly_check=strftime(last_hourly_check, "%F - %T")
| table host platform ip splunkVersion last_phone_home status
| sort status
| dedup host
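One way to keep only clients that have phoned home within 5 days is to filter on the raw epoch value before it is formatted; a sketch of a line that could go right after the join (432000 seconds = 5 days):

```spl
| where now() - lastPhoneHomeTime <= 432000
```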
Hi, I'm using the following search string in Infoblox reporting:

sourcetype=ib:audit index=ib_audit | sort -_time | rename TIMESTAMP as "Timestamp", ADMIN as "Admin", ACTION as "Action", OBJECT_TYPE as "Object Type", OBJECT_NAME as "Object Name", EXEC_STATUS as "Execution Status", MESSAGE as "Message", host as "Member" | search Admin=* Action=Created OR Action=Deleted "Object Type"="IPv4 Network Container" OR "Object Type"="IPv4 Network" | fields + Action, Admin, Member, "Object Name", "Object Type", "Comment" Timestamp | fields - _raw, _time

This search is to alert on new networks or network containers created, via the audit log. What I would like to do in addition is pull in the comment from the network, which looks like this in the Splunk search:

2022-10-03 15:00:23.984Z [guestrw]: Created Network 192.168.100.0/24 network_view=default extensible_attributes=[[name="Building",value="B2"]],address="192.168.100.0",auto_create_reversezone=False,cidr=24,comment="DDIguy Reporting test",common_properties=[domain_name_servers=[],routers=[]],disabled=False,discovery_member=NULL,enable_discovery=False,enable_immediate_discovery=False,network_view=NetworkView:default,use_basic_polling_settings=False,use_member_enable_discovery=False "commentDDIGUY Reporting test"

Can someone please help me understand how I can pull that into the first search query?
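A sketch of pulling the comment out of the raw event with an inline regex, assuming the comment="..." key/value pair always appears as in the sample above; the extracted field could then be added to the fields list:

```spl
| rex field=_raw "comment=\"(?<Comment>[^\"]*)\""
```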
Hi, when I am checking the health check dashboard I see an error under Indexing Ready:

Indexing Ready
Root Cause: Cluster is not indexing ready, please bring up at least RF number of peers.
Unhealthy Instance: xxxxx
Last 50 related messages: None

Note: the instance is not an active instance. But when I check, SF and RF are shown as met. How do I resolve this issue?
I'm a bit confused. If I have accelerated data models and I upgrade the CIM version, and the update adds new fields to the data models, what then? Will my data models stay at the old definition version, since they are accelerated and you can't edit accelerated data models? Will I have to rebuild my accelerations from scratch? That could be a bit... unfortunate, since my summaries are huge.
Hello Splunkers, I have a small question: as a best practice (or for what reasons), should I use the syslog or the TCP configuration inside the outputs.conf file? Both TCP and syslog can forward data, right? What is the benefit of each possibility? https://docs.splunk.com/Documentation/Splunk/latest/Admin/outputsconf#TCPOUT_SETTINGS https://docs.splunk.com/Documentation/Splunk/latest/Admin/outputsconf#Syslog_output I'm trying to forward logs from one HF to another HF (and I have multiple types of logs). Thanks a lot, GaetanVP
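Broadly, tcpout uses Splunk's own forwarding protocol (preserving metadata and supporting indexer acknowledgement, load balancing, and TLS), which is the normal choice between two Splunk instances, while syslog output emits plain syslog for third-party receivers and loses Splunk metadata. A sketch of the usual HF-to-HF form, with a hypothetical hostname:

```ini
# outputs.conf on the sending heavy forwarder
[tcpout]
defaultGroup = downstream_hf

[tcpout:downstream_hf]
server = hf2.example.com:9997
useACK = true
```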
Hello, how can I change the owner of an alert in the Alert Manager action? I only have "unassigned" available.
Hi, I am between a rock and a hard place, looking for any suggestion to solve this. I am using URL Toolbox to dissect the URI field "ut_path" into fields separated by "/" characters. For instance:

index=foo sourcetype="bar" Requested_URI=* | lookup ut_parse_simple_lookup url AS Requested_URI | fields ut_* Requested_URI User_ID | table User_ID RequestUri ut_scheme, ut_netloc, ut_path, ut_query, ut_fragment, ut_params

ut_path = /a1/f1/f2/f3/4/5
ut_path = /a1/f1/f2
ut_path = /a1/f1/f2/f3/f4
ut_path = /a1/f1/f2/f3

The ut_path field has different value paths of varying length, and each section (like f1) needs to be extracted into a new field so that I can run stats on it. Is there a way to auto-extract dynamically, or conditionally? Thank you!
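A sketch of splitting ut_path into a multivalue field and promoting each segment to its own numbered field, assuming at most six segments (index 0 is empty because the path starts with "/"):

```spl
| eval parts = split(ut_path, "/")
| eval seg1 = mvindex(parts, 1), seg2 = mvindex(parts, 2), seg3 = mvindex(parts, 3),
       seg4 = mvindex(parts, 4), seg5 = mvindex(parts, 5), seg6 = mvindex(parts, 6)
```

Alternatively, stats can often be run on the multivalue parts field directly without naming each segment.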