All Topics

I have the HTTP Event Collector configured and working, using a token from a client. Now I want to understand where these JSON-format events get stored. The exact JSON logs coming in via HEC: are they stored somewhere in our Splunk environment?
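In case it helps: HEC events are indexed like any other data, in the index configured on the token (or the default index, typically main, if none is set). A quick sketch to find them, assuming the default behavior where the event source is set to the token name; my_hec_token_name is a placeholder:

```
index=* source="http:my_hec_token_name" earliest=-1h
| head 5
```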
Hi all, I would like to extract the client IP from the Message below.

Message=Internal event: A client issued a search operation with the following options. Client: 172.25.1.250:6247 Starting node: DC=abc,DC=contoso,DC=com,DC=au Filter: ( & ( ! (uSNChanged=*) ) ( & ( | (mail=*) (proxyAddresses=*) ) ( | (objectClass=contact) (objectClass=publicFolder) (objectClass=group) (objectClass=person) (objectClass=organizationalPerson) (objectClass=user) (FALSE) ) ) ) Search scope: subtree Attribute selection: sAMAccountName,mail,proxyAddresses,objectClass,uSNChanged

I can make it work on regex101, but Splunk does not show anything.

| rex field=Message max_match=0 "Client:(?<Client>\n.*)"
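One possible cause, offered as a sketch: the pattern above requires a literal newline after "Client:", but in the sample the IP follows a space on the same line. Something like this may work (the field name Client is kept from the original):

```
| rex field=Message max_match=0 "Client:\s+(?<Client>\d{1,3}(?:\.\d{1,3}){3})"
```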
ES 6.0.2 is Python 2/3, but the Release Notes say: “However, this release is not completely dual Python 2 and Python 3 compatible, as it requires the Python 2 interpreter that ships with Splunk Enterprise 8.0”. Does this mean it’s only compatible with Splunk 8.0?
ES 6.0.2 is Splunk 8.0 compatible and python 2/3 compatible. ES 6.0.2 ships with MLTK 4.4. MLTK 4.4 is not 8.0 compatible. Also, MLTK relies on PSC. Is that shipped with ES too?
I want to upgrade ITSI from 4.2.1 to 4.4.4 on Splunk core 7.2.9.1, per the compatibility matrix: https://docs.splunk.com/Documentation/VersionCompatibility/current/Matrix/CompatMatrix. I would upgrade ITSI first and then core to 8.0.4. The question is whether we have to keep ITSI and MLTK on Python 2 prior to the core upgrade to avoid incompatibility, or do we accept the incompatibility when we upgrade to ITSI 4.4.4 and MLTK 5.x, knowing it should be compatible once the Splunk core upgrade is completed? Further information about MLTK on ITSI: https://docs.splunk.com/Documentation/ITSI/4.4.4/Install/Compatibility
Hello, I have a table like this:

action    user_name
success   user1
fail      user1
fail      user2
fail      user1
fail      user1
success   user2
fail      user2
fail      user1
fail      user2
fail      user2

I want to show, per user, every success action where that user's previous 3 actions were all fails.
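One hedged approach using streamstats to count fails in the previous three events per user (field names taken from the table above):

```
| streamstats window=3 current=f count(eval(action="fail")) as prev_fails by user_name
| where action="success" AND prev_fails=3
```

streamstats processes events in search-result order, so depending on your sort order you may need | sort 0 _time (or | reverse) first so events are evaluated chronologically.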
The search head in the Non-Prod environment will not be active and would only be turned on in the event of a disaster where the Production SH is down. I was thinking about enabling an rsync between both search heads so that the conf. files and knowledge objects from the Prod SH are regularly synced over to the Non-Prod SH. Does anyone have any suggestions or better approaches?
I was able to get the average CPU time. However, I am getting a result like this:

Workload    CPU_TIME    AVG_TIME
PART A      3.5
PART B      2485.4
AVG_TIME                226.26

I want to get the average time value under the same column as CPU_TIME. Here is the query that I have:

| fields SMF30JBN DATETIME SMF30CPT
| eval Job_Name=SMF30JBN, Date = substr(DATETIME,1,10)
| eval WORKLOAD = substr(Job_Name,1,3)
| eval CP_Time=SMF30CPT
| eval cpu_time=strptime(SMF30CPT,"%H:%M:%S.%2N")
| eval base=strptime("00:00:00.00","%H:%M:%S.%2N")
| eval ctime=cpu_time-base
| eval ctime=round(ctime, 2)
| stats sum(ctime) as CPU_TIME by WORKLOAD
| eval SYST = substr(WORKLOAD,1,1)
| eval TYPE = case(SYST = "F", "PART A PROD", SYST = "M", "PART B PROD")
| appendpipe [| stats sum(CPU_TIME) as CPU_TIME by TYPE | eval WORKLOAD="".TYPE." CPU_TIME"]
| fields WORKLOAD CPU_TIME
| append [search index=cds_ffs_smf030 SMFID=EDCA sourcetype=syncsort:smf030 SMF30STP=5
    | fields SMF30JBN DATETIME SMF30CPT
    | eval Job_Name=SMF30JBN, Date = substr(DATETIME,1,10)
    | eval WORKLOAD = substr(Job_Name,1,3)
    | eval CP_Time=SMF30CPT
    | eval cpu_time=strptime(SMF30CPT,"%H:%M:%S.%2N")
    | eval base=strptime("00:00:00.00","%H:%M:%S.%2N")
    | eval ctime=cpu_time-base
    | eval ctime=round(ctime, 2)
    | stats sum(ctime) as CPU_TIME by WORKLOAD
    | stats avg(CPU_TIME) as AVG_TIME
    | eval AVG_TIME = round(AVG_TIME, 2)
    | eval WORKLOAD="AVG_TIME"]
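One possible simplification, sketched under the assumption that the goal is a single CPU_TIME column with the average appended as an extra row: compute the average from the rows already present with appendpipe instead of re-running the base search in an append.

```
...existing base search...
| stats sum(ctime) as CPU_TIME by WORKLOAD
| appendpipe
    [ stats avg(CPU_TIME) as CPU_TIME
      | eval CPU_TIME=round(CPU_TIME, 2), WORKLOAD="AVG_TIME" ]
```

Because appendpipe writes the average into CPU_TIME rather than a separate AVG_TIME field, the value lands in the same column as the per-workload totals.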
Hey all, I'm finishing the configuration and tuning of the Microsoft 365 App for Splunk, and on the overview dashboard under M365 a single panel is having an issue with a macro. Error message:

Error in 'SearchParser': The search specifies a macro 'o365_service_status' that cannot be found. Reasons include: the macro name is misspelled, you do not have "read" permission for the macro, or the macro has not been shared with this application. Click Settings, Advanced search, Search Macros to view macro information.

I checked the app's default and local folders, and the GUI, and don't see this macro defined anywhere. Does anyone have any clue what this macro holds so that I can create it and get this dashboard panel working? Thanks, Andrew
I have a search which produces a list of fields in an output table, including a user ID. I want to take that ID, search another index, and add additional output columns to the table. Functionally it behaves like this:

| makeresults
| eval requesting_user="david"
| appendcols [search index=admon sAMAccountName=$requesting_user$ earliest=0 latest=now
    | stats last(mail) as mail, last(givenName) as givenName, last(cn) as cn]

In the end, I want a single row with the requesting_user, mail, givenName, and cn fields, but I'm not quite sure how to join these two searches together into a single row of output. I've experimented with appendcols, appendpipe, append, and map. Only map seems to be able to read the requesting_user token, but it seems to throw away the requesting_user field. The rest of the commands I've tried don't seem to be able to read the token, or something else is going on, because I only get null values for those fields. When I execute the appendcols command substituting the actual user name for the token, it retrieves the values I want. Can anyone help me understand how to include the fields from the bottom search in the output table of the top search?
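One hedged way to get a single combined row without token substitution: rename the lookup key inside a subsearch and use join, so no $...$ expansion is needed (index and field names taken from the question):

```
| makeresults
| eval requesting_user="david"
| join type=left requesting_user
    [ search index=admon earliest=0 latest=now
      | rename sAMAccountName as requesting_user
      | stats last(mail) as mail, last(givenName) as givenName, last(cn) as cn by requesting_user ]
| table requesting_user mail givenName cn
```

Because join matches on the shared requesting_user field, the outer row keeps its user ID while picking up the subsearch's columns.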
Hi, given the below search:

index="my_index" source="mysource" _index_earliest=-1h
| rex field=_raw "\:\sPT(?P<respt>\d+\.\d+)"
| fields response_time_ms respt
| convert num(respt) as response_time_ms
| eval response_time_ms = response_time_ms*1000
| stats exactperc95(response_time_ms) as myPercent

It works fine in the UI, but when I try to execute it through the API I get no data back and an obscure message:

myPercent: Specified field(s) missing from results: 'response_time_ms'

I can't figure out what is going on. I have listed the field in "rf" with no luck. I do have the search API working for other info, so I know my credentials and POST are accurate. Is there a limitation on stats? Thanks, Chris
I was wondering if there is an undocumented way to give the Ruby Agent the Unique Host ID on a server that is also running a Standalone Machine Agent. I would like to see app and server metrics associated in the dashboard. I know this is possible with some of the other app agents (PHP, .Net, ...).
I am trying to create a table that will fetch the data for all events for the past 7 days. I want it to be run at any time and update the dates dynamically at the top. I want it to look something like this:

Host    Sourcetype    6/1     6/2     6/3     6/4     6/5     6/6     6/7
" "     " "           Count   Count   Count   Count   Count   Count   Count

Here is my search so far:

index=" "
| bucket span=1d _time
| eval dayOfDate=strftime(_time,"%Y/%m/%d")
| stats count by host, sourcetype, dayOfDate
| table host sourcetype dayOfDate count
| rename count as "Number of events"

The dates show vertically, and I want them to show up as column headers for the last 7 days, updating dynamically whenever I run this search.
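One possible sketch for pivoting the days into column headers with chart. chart takes a single split-by field, so host and sourcetype are combined into one row key here; adjust to taste:

```
index=" " earliest=-7d@d latest=@d
| bin span=1d _time
| eval day=strftime(_time,"%m/%d")
| eval host_st=host." / ".sourcetype
| chart count over host_st by day
```

The relative earliest/latest bounds make the 7-day window, and therefore the date columns, update automatically each time the search runs.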
Hi, I'm pulling Tenable IO logs into Splunk and there is a field named first_found relating to a vulnerability. The format is UNIX. I'd like to take that field's data, create a new field, and format it as %m/%d/%Y at search time. I've scoured this site and reddit and can't get it to work. Here is an example:

first_found = 2020-05-27T04:17:39.159Z
would like to create: new_date = 5/27/2020

I've tried the below, but with no luck:

(search query) | convert timeformat="%m/%d/%y" ctime(first_found) AS new_date

Any help would be appreciated.
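For what it's worth, the sample value looks like an ISO 8601 string rather than epoch time, and convert ctime() expects epoch seconds, which may be why the attempt above returns nothing. A sketch that parses the string into epoch time first and then reformats it (the timestamp format is assumed from the example):

```
| eval new_date=strftime(strptime(first_found, "%Y-%m-%dT%H:%M:%S.%3NZ"), "%m/%d/%Y")
```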
Good afternoon, I have some questions regarding the data in the ADP Mobile client report:

How often does this data update?
Total Cumulative Users: does this include terminated employees who previously registered for the app?
Total Active Users: what is the criteria for this? Logged on within the last 30, 60, 90 days?
It's July 1st and I tried running data for the previous month to get June data. Nothing came up for Total Active Users; why is this?
Do you have a sheet that explains what all of the metrics mean? That would be very helpful.

Thank you! Keri
I have a batch file that runs this:   break > D:\....\test.txt break > D:\....\test2.txt process1 >> D:\....\test.txt process2 >> D:\....\test2.txt   Where each process appends text to test.txt. I included break at the beginning so that it will clear out the test.txt file and prevent it from becoming too large.  I want all the contents of the .txt file to be forwarded as one event. As of right now, it's forwarding the events one line at a time of each test.txt. How can I forward these events as one single event with multiple lines? They seem to be cut off due to line spacing/indentation so I want to turn that off as well.
Hi everyone! Recently I've been trying to standardize our instrumentation process. I was checking the folder directory of the AppDynamics Java Agent and found that there are two javaagent.jar files: one in the root directory, and another inside the version folder. What's the difference between the top-level javaagent.jar and the one inside the version folder? Thinking of the long term, there may be times when we need to update the Java agent on the target host. When updating the Java agent, we might need to update environment variables, config files, and other critical settings. With this, is there a way to point to the Java agent using a standard (non-changing) directory?
Hello all, on the SH members we are receiving the following error, while it is not showing up on the deployer; here is a screenshot (this is the PCI Compliance app). Thank you!
Recently, I am planning to standardize our process on agent installation of AppDynamics. The current process would be to extract the zip file to a specific folder (/opt/appdynamics for Machine Agent). Once the Machine Agents are reported running, we can view the machine agent version on our Controller.  Thinking of the long term, there might be times when we need to update the agents deployed on the target hosts. Are there any ways to check the version of the machine agent deployed on the server? I'm thinking of a text file they can open to see the current version of the machine agent deployed.
Hi all, I am using Splunk Enterprise 7.3.6, and access to my application occurs with an ID (a number or string with no special characters), an email, or a domain login (like domain-subdomain\xxxx). I tried Splunk IFX (Interactive Field Extraction) to extract failed logins, which can be an ID, email, or domain login, but it seems Splunk is not able to get all three via IFX. Since the email and domain login contain special characters, it only extracts values up to the special character. Below are some samples:

User login failed: abcde  #Extract works
User login failed: 12345  #Extract works
User login failed: abcd@gmail.com  #Extract picks up abcd alone
User login failed: mydom-sub\abcde  #Extract picks up mydom alone

How do I cover all of these scenarios in one extraction using IFX? Any suggestions and advice would be helpful. I have no idea about regex, hence I never tried it, and IFX looked easier for those of us new to Splunk. Thanks
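In case regex turns out to be acceptable after all, one sketch that captures all four sample formats by grabbing everything after the fixed prefix up to the next whitespace (the field name failed_user is a placeholder):

```
| rex "User login failed:\s+(?<failed_user>\S+)"
```

Since \S+ matches any run of non-whitespace characters, it picks up @, ., and \ and so handles plain IDs, emails, and domain logins alike.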