All Topics

I want to move my .NET 6 API to a Graviton instance with arm64. Does the agent support it? I don't see any docs about it. I use init container instrumentation, but I see only an amd64 docker image for Alpine. Is there any other instrumentation process that supports arm64?
Hello, I'm struggling to convert the response times for two status codes (200 and 400) from ms to secs and display the values in a line chart. tmdEvntMs is the API response time in ms, and httpStatus holds my status codes. I tried using foreach, and it just converts the response time back to ms.

timechart span=6h avg(tmdEvntMs) AS avg_response by httpStatus | foreach * [eval avg_response=round(avg_response/1000, 2)]

Any suggestions would be greatly appreciated. Thank you
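A possible fix worth noting here: after `by httpStatus`, timechart names the output columns by status code ("200", "400"), so a field called avg_response no longer exists, and the foreach above re-evaluates nothing useful. The foreach template token <<FIELD>> rewrites each series column in place. A sketch, untested against this data:

```
... | timechart span=6h avg(tmdEvntMs) AS avg_response by httpStatus
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1000, 2)]
```

The single quotes around '<<FIELD>>' matter because the column names are numeric; the * wildcard should skip internal fields such as _time.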
Greetings fellow Splunkers, I was wondering if anyone has figured out the most accurate metric to track when a user logs into Windows: not the boot/startup time, but the time between when a user enters their password and when they are able to interact with the desktop. I am not able to see a particular event for this. Waiting for GPO to complete is not viable, since we stream GPOs in the background. Comparing local and AD events might prove useful, but we have a significant number of users who are WFH and use cached creds until they get on the VPN. Measuring against the VPN login would be simpler, but if they do anything else before they log into the VPN, that will throw it off as well. Appreciate any thoughts or ideas you fine folks might have. Thank you!
Hello fellow Splunkers. I need a little help with an issue I am having with one of my dashboards. I'm sure it's a simple fix, but I'm having a tough time figuring out the correct way to do it. A little background: we created a dashboard to check the status of connected forwarders for auditing purposes. With the new infrastructure we have a VDI setup that spins up a new hostname when a new user logs in. This results in our dashboard showing a bunch of different forwarders as offline. I want the search to exclude anything that hasn't reported in within the past 5 days, so those hosts aren't pulled into the chart. The search string is below; the time I would like to filter on is last_phone_home. Thanks for any help you can provide!

| inputlookup hosts.csv | table * | join max=0 [| rest splunk_server=local /services/deployment/server/clients | fields - applications.* serverClasses.* eai* splunk_server author id title | collect index=summary addtime=true marker="dataset=deployment_server_clients" | eval diff=now()-lastPhoneHomeTime | eval status=if(diff>120, "Connection Failed", "Connection Successful") | rename hostname as host] | rename utsname as platform | eval last_phone_home=strftime(lastPhoneHomeTime, "%F - %T") | eval hostname=lower(hostname) | eval last_hourly_check=strftime(last_hourly_check, "%F - %T") | table host platform ip splunkVersion last_phone_home status | sort status | dedup host
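One possible approach, assuming lastPhoneHomeTime in the REST output is epoch seconds (it is used that way in the diff calculation above): filter stale clients inside the subsearch, so anything that has not phoned home in 5 days never reaches the join or the chart. A sketch:

```
| inputlookup hosts.csv
| join max=0 [| rest splunk_server=local /services/deployment/server/clients
    | where now() - lastPhoneHomeTime < 5*86400
    | eval status=if(now()-lastPhoneHomeTime>120, "Connection Failed", "Connection Successful")
    | rename hostname as host]
| ...
```

Since join defaults to type=inner, lookup rows without a surviving match simply drop out.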
Hi, I'm using the following search string in Infoblox reporting:

sourcetype=ib:audit index=ib_audit | sort -_time | rename TIMESTAMP as "Timestamp", ADMIN as "Admin", ACTION as "Action", OBJECT_TYPE as "Object Type", OBJECT_NAME as "Object Name", EXEC_STATUS as "Execution Status", MESSAGE as "Message", host as "Member" | search Admin=* Action=Created OR Action=Deleted "Object Type"="IPv4 Network Container" OR "Object Type"="IPv4 Network" | fields + Action, Admin, Member, "Object Name", "Object Type", "Comment" Timestamp | fields - _raw, _time

This search alerts on new networks or network containers created, based on the audit log. What I would like to do in addition is pull in the comment from the network, which looks like this in the Splunk search: 2022-10-03 15:00:23.984Z [guestrw]: Created Network 192.168.100.0/24 network_view=default extensible_attributes=[[name="Building",value="B2"]],address="192.168.100.0",auto_create_reversezone=False,cidr=24,comment="DDIguy Reporting test",common_properties=[domain_name_servers=[],routers=[]],disabled=False,discovery_member=NULL,enable_discovery=False,enable_immediate_discovery=False,network_view=NetworkView:default,use_basic_polling_settings=False,use_member_enable_discovery=False "commentDDIGUY Reporting test"  Can someone please help me understand how I can pull that into the first search query?
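If the comment appears in the raw event as a comment="..." key/value pair (as in the sample above), a rex extraction may be all that's needed; the field name Comment below is my choice. A sketch:

```
... | rex field=_raw "comment=\"(?<Comment>[^\"]*)\""
| fields + Action, Admin, Member, "Object Name", "Object Type", Comment, Timestamp
```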
Hi, when I check the health check dashboard I see an error in Indexing Ready:

Indexing Ready
Root Cause: Cluster is not indexing ready, please bring up at least RF number of peers.
Unhealthy Instance: xxxxx
Last 50 related messages: None

Note: the instance is not an active instance. But when I check, it shows that SF and RF are met. How do I resolve this issue?
I'm a bit confused. If I have accelerated datamodels and upgrade the CIM version, and the update adds new fields to the datamodels... what then? Will my datamodels keep the old definition version, since they are accelerated and you can't edit accelerated datamodels? Will I have to rebuild my accelerations from scratch? That could be a bit... unfortunate, since my summaries are huge.
Hello Splunkers, I have a small question: as a best practice (or for what reasons) should I use the syslog or the TCP configuration inside the outputs.conf file? Both TCP and syslog can forward data, right? What is the benefit of each option? https://docs.splunk.com/Documentation/Splunk/latest/Admin/outputsconf#TCPOUT_SETTINGS https://docs.splunk.com/Documentation/Splunk/latest/Admin/outputsconf#Syslog_output I'm trying to forward logs from a HF to another HF (and I have multiple types of logs). Thanks a lot, GaetanVP
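For context: HF-to-HF forwarding normally uses tcpout, which speaks Splunk's own cooked S2S protocol and preserves metadata such as index, source, and sourcetype end to end; syslog output re-serializes events as plain syslog text and is mainly intended for third-party syslog receivers. A minimal tcpout sketch (host and port are placeholders):

```
[tcpout]
defaultGroup = downstream_hf

[tcpout:downstream_hf]
server = hf2.example.com:9997
```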
Hello, how can I change the owner of an alert in the Alert Manager action? I only have "unassigned".
Hi, I am between a rock and a hard place, looking for any suggestion to solve this. I am using URL Toolbox to dissect the URI "ut_path" into fields separated by "/" characters. For instance:

index=foo sourcetype="bar" Requested_URI=* | lookup ut_parse_simple_lookup url AS Requested_URI | fields ut_* Requested_URI User_ID | table User_ID Requested_URI ut_scheme, ut_netloc, ut_path, ut_query, ut_fragment, ut_params

ut_path = /a1/f1/f2/f3/4/5
ut_path = /a1/f1/f2
ut_path = /a1/f1/f2/f3/f4
ut_path = /a1/f1/f2/f3

The "ut_path" field has path values of varying length; each section (like f1) needs to be extracted into a new field so that I can run stats on it. Is there a way to auto-extract dynamically, or conditionally? Thank you!
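One way to handle varying depth without further URL Toolbox lookups: split the path into a multivalue field and pick segments with mvindex, which returns null when a segment doesn't exist, so shorter paths just leave later fields empty. A sketch (the seg1..seg4 field names are mine):

```
... | eval parts=split(ltrim(ut_path, "/"), "/")
| eval seg1=mvindex(parts, 0), seg2=mvindex(parts, 1), seg3=mvindex(parts, 2), seg4=mvindex(parts, 3)
| stats count by seg1 seg2
```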
Hi all, in <format type="color"> can I also use a row attribute? I need to color a table cell in the dashboard if the value in a row exceeds that row's target value. I tried with:

<colorPalette type="expression">if(value > 77, "#FA8072", "")</colorPalette>

but it colors all the rows. My table:

WO    September    October    November    December
A     80           77         84          46
B     88           23         88          26
C     55           34         32          93

The targets for A, B, C are 60, 70, 80 respectively, so the cells that should be colored are:
A: September, October, November
B: September, November
C: December

Is it otherwise possible to insert a condition in the expression that takes the row into account? Thanks, greetings, Antonio
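One possibility, assuming the per-row target can be added to the search results as its own column: the expression color palette can reference other cells in the same row as row.<fieldname>, which makes the comparison per-row. A sketch, with one <format> element per month column and an assumed column named target:

```
<format type="color" field="September">
  <colorPalette type="expression">if(value > row.target, "#FA8072", "")</colorPalette>
</format>
```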
I am working in a clustered environment and getting data from HEC. I want to list the indexes which are receiving HEC data, along with the data owners.
I was working with the Splunk Enterprise SDK for JavaScript and used the splunkjs.UI.Charting.Chart class to create a chart and embed it in the web page, but it threw the error "cannot access property of undefined; accessing Chart". Is there any way to solve it? Thanks in advance for the help. I have attached the error screenshots and the code block.
Hi all, I am trying to feed the results of a query into another query with a different time range and index, and I'm facing issues with this. Context: I want to look for any user activity across my servers on d+1 for the list of user accounts that show up as disabled in Active Directory (Windows event code 4725). From the search query below, I want to take the list of usernames where count=1 and look for any user activity from d+1 onwards after earliest(_time) is recorded.

index=useractivitylogs [search index=wineventlog EventCode=4725 | eval timestamp=strftime(_time, "%Y-%m-%dT%H:%M:%S") | stats count as count, earliest(timestamp) by username | where count=1]

Example: EventCode 4725 is recorded for these 2 users based on my inner search:
Timestamp | User
5 September 2022 | Anna
10 September 2022 | Betty
Then, I want to feed these results on to identify any user activity found on any servers on d+1 after the recorded Timestamp. Thank you.
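A hedged sketch of one way to stitch the two searches together with join rather than a bare subsearch, so the disable time travels along with each username and can be compared per event (the disabled_time field name and the 1-day offset are assumptions):

```
index=useractivitylogs
| join type=inner username [
    search index=wineventlog EventCode=4725
    | stats count earliest(_time) AS disabled_time by username
    | where count=1
    | fields username disabled_time]
| where _time >= relative_time(disabled_time, "+1d")
```

Join subsearches are subject to result-count and runtime limits, so this is only workable at moderate volumes.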
Hello! For the last few months I have been working on automating the adding/deleting of new/old databases through a CMDB database. It's working pretty well, and maybe one day I hope to make it available to the community. But last week I ran into some trouble, and I cannot find any mention of this problem. This project has a lot of Oracle databases, somewhere around 700, and it turns out some of them contain an underscore (_) in the middle of their SID. I noticed that all of these fail to connect to the database. It does not matter whether the connection gets created from the automation app or from the web UI; the error message is the same:

{"errors":[{"messageKey":"DATABASE_CONNECTION_INVALID_ERROR","args":{"name":"<HOSTNAME>"},"message":"Database connection <HOSTNAME> is invalid.","messageTemplate":"Database connection {name} is invalid.","detail":"Listener refused the connection with the following error:\nORA-12505, TNS:listener does not currently know of SID given in connect descriptor\n \r\n\r\nDiagnosis:\r\nIt seems like the provided database SID does not exist.\r\n\r\nPossible resolution:\r\nPlease provide the correct database SID"}]}

So it would seem the DB Connect app does not like it when underscores are part of the SID. Has anyone else run into this issue?
Hi everyone, I am new to Splunk. I am looking at Windows event logs for EventCode=4725 for all usernames within a week's timeframe. I want to remove a username from the results entirely if there is more than 1 count of this event code for that username, and then list in a table the timestamp and username for each remaining occurrence. Example: usernames with EventCode=4725 recorded within 1 week:

Day 1 10pm : anna
Day 1 11pm : betty
Day 3 10pm : anna
Day 3 1pm :  charlie
Day 7 2pm : zach

The final result I want is:

Day 1 11pm : betty
Day 3 1pm :  charlie
Day 7 2pm : zach

From the above, 'anna' is removed completely because her event showed up more than once. This is my original query:

index=wineventlog EventCode=4725 | fields * | eval timestamp=strftime(_time, "%Y-%m-%dT%H:%M:%S") | stats count by username | where count=1

I then realised the problem with using stats count by: I can no longer show the timestamp in the resulting statistics. I have thought of using dedup to remove duplicate values, but I have not found a way to remove duplicates including the first occurrence itself. Please help. Thank you
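One way to keep the per-event timestamps while dropping any username seen more than once is eventstats, which annotates every event with the aggregate instead of collapsing the results. A sketch:

```
index=wineventlog EventCode=4725
| eventstats count AS user_count by username
| where user_count=1
| eval timestamp=strftime(_time, "%Y-%m-%dT%H:%M:%S")
| table timestamp username
```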
I have a question/issue with the use of the rising option in DB Connect. I'm using Splunk Enterprise v9.0.x and DB Connect 3.9.0, and I'm trying to understand how DB Connect works with the rising option. My issue is that when I run it, the DB server's memory use increases and it swaps to disk in order to return the results. The database I'm trying to read has more than 100 million entries. What I did when configuring the connection:

1. When I read the data from the DB to get the results, the server responds quickly.
2. When I add

select * from DB where ID > ? ORDER BY ID ASC

the server times out, so I increased the timeout value; I eventually got the option to add the ID number I wanted to track from and proceeded to the next window.
3. In the "Max Rows to Retrieve" field I entered 50000 and left the Fetch Size at the default 300. The execution policy was every 10 minutes.

I was monitoring the database server and noticed that each request Splunk DB Connect makes causes issues on the server. It looks like the "ORDER BY ID ASC" makes the DB server sort the whole table, which increases memory consumption and disk swapping. It returns ALL (I guess) the results, and Splunk ingests only 50,000 events, as specified in "Max Rows to Retrieve". I also checked the queries Splunk runs on the DB side, and the query string didn't show any limit, for example: SELECT * FROM DB WHERE ID > ? ORDER BY ID ASC, with no "Limit 50000". The ID field is indexed.

Is Splunk DB Connect always asking the database server to sort ALL the records? How can I import the data without causing performance issues on the DB server? Can I add the limit option in the config file, for example? EDIT: I don't know if DB Connect would work if I put <SELECT * FROM DB WHERE ID > ? ORDER BY ID ASC Limit 50000> at the end of the query in db_inputs.conf. Any thoughts? Or I could create batch jobs to slowly import data up to today and then use the "default" rising string? The question then is whether the ORDER BY ASC would still cause performance issues on the DB server.
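On the LIMIT question: Oracle has no LIMIT clause, but Oracle 12c and later accept the SQL-standard row-limiting syntax FETCH FIRST, which caps the rows inside the query itself so the cap is enforced on the server rather than by the JDBC fetch. A sketch (table and column names are placeholders):

```
SELECT * FROM mytable
WHERE ID > ?
ORDER BY ID ASC
FETCH FIRST 50000 ROWS ONLY
```

Since ID is indexed, a top-N fetch ordered by ID can usually walk the index rather than sorting the whole table.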
Good afternoon! Please help with the following request; unfortunately I'm not familiar enough with SPL to produce a working version. This request is needed for an alert. Let me describe the workflow first: we have a message thread of 12 messages. Each message has a unique routepointID field; the values of this field begin with a number, and with each message in the chain the value grows: 1.pointID, 2.pointID (this is an example). The alert should fire if one of the messages arrives out of order. For example: we received 1.pointID and 2.pointID and are waiting for 3.pointID, but 4.pointID arrives instead; then the alert needs to run. An example of our query to find message threads in the message flow:

index="main" sourcetype="testsystem-script4" | eval srcMsgId_Исх_Сообщения=if(len('Correlation_srcMsgId')==0 OR isnull('Correlation_srcMsgId'),'srcMsgId','Correlation_srcMsgId') | eval timeValue='eventTime' | eval time=strptime(timeValue,"%Y-%m-%dT%H:%M:%S.%3N%Z") | sort -eventTime | streamstats values(time) current=f window=1 as STREAM_RESULT global=false by srcMsgId_Исх_Сообщения | eval diff=STREAM_RESULT-time | stats list(diff) as TIME_DIF list(eventTime) as eventTime list(srcMsgId) as srcMsgId_Бизнес_Сообщения list(routepointID) as routepointID count as Кол_Сообщений by srcMsgId_Исх_Сообщения
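A possible starting point for the gap detection itself, assuming routepointID always begins with the numeric sequence position (as in "3.pointID"): extract the number, order the events per thread, and compare each value with its predecessor via streamstats. Here corr_id is a stand-in for the correlation field derived in the query above:

```
index="main" sourcetype="testsystem-script4"
| eval corr_id=if(len('Correlation_srcMsgId')==0 OR isnull('Correlation_srcMsgId'), 'srcMsgId', 'Correlation_srcMsgId')
| eval seq=tonumber(mvindex(split(routepointID, "."), 0))
| sort 0 corr_id _time
| streamstats current=f last(seq) AS prev_seq by corr_id
| where isnotnull(prev_seq) AND seq != prev_seq + 1
```

Each surviving event is one that arrived out of sequence, which can drive the alert condition.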
Hello, I'm using a timechart with the following block to allow the user to select a specific area and see stats on it:

<selection>
  <set token="TOKEN_TIME_selection.earliest">$start$</set>
  <set token="TOKEN_TIME_selection.latest">$end$</set>
</selection>

However, the chart has an area of interest that I can detect and that I want to be selected by default. I still want to show the whole chart (so I can't use <earliest> and <latest>), but it would be nice to show it with the area of interest already selected by default, and therefore showing stats for that area. I attached a screenshot to clarify what I would like to show by default. I can't find anything in the documentation to define the <selection> area in code, so I would like to know if there is any way to do it. Thank you!
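As far as I can tell there is no documented Simple XML attribute that pre-draws the chart's selection overlay, but the tokens that the selection would set can be given default values in the dashboard's <init> block, so the dependent stats panels open already scoped to the area of interest (the time values here are placeholders):

```
<init>
  <set token="TOKEN_TIME_selection.earliest">-24h@h</set>
  <set token="TOKEN_TIME_selection.latest">now</set>
</init>
```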
Hi all, the HFs' OS was recently migrated from CentOS to RHEL. Since then the HFs are not sending any input data to Splunk, though I can see internal logs. In the internal logs I see the error:

Cooked connection to ip=<indexer ip> timed out

The other error I can see is:

message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk_assist/bin/uiassets_modular_input.py" splunk.AuthenticationFailed: [HTTP 401] Client is not authenticated.

I need help, as almost 9 forwarders are not reporting right now.