All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I have a key:value pair for DB names but need only the first part. Example:
Current: DBNAME : db001_inst1:schemanamexyx or DBNAME : db01_inst1:schemanamexyx
Requested: a rex statement that returns only the value in front of the colon, i.e., db001_inst1 or db01_inst1.
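A minimal rex sketch, assuming the field is literally named DBNAME and the instance part never contains a colon (db_instance is a name I chose; call it what you like):

| rex field=DBNAME "^(?<db_instance>[^:]+)"

This captures everything up to the first colon into db_instance, yielding db001_inst1 or db01_inst1.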
The special characters in my search results are converted to HTML entity names and output like &quot; and &lt;. Under what conditions are they converted? I want results 2, 3, and 4 (the unconverted form). My version of Splunk is 8.2.6.

1. search query: | makeresults | eval text="@@@javascript&colon;" | eval text=replace(text, "@@@", "\"") | table text
   result: &quot;javascript&colon;
2. search query: | makeresults | eval text="@@@javascript" | eval text=replace(text, "@@@", "\"") | table text
   result: "javascript
3. search query: | makeresults | eval text="@@@javascripta:" | eval text=replace(text, "@@@", "\"") | table text
   result: "javascripta:
4. search query: | makeresults | eval text="@@@javascripa:" | eval text=replace(text, "@@@", "\"") | table text
   result: "javascripa:
5. search query: | makeresults | eval text="@@@javascript&colon;" | eval text=replace(text, "@@@", "<") | table text
   result: &lt;javascript&colon;
Hi all - The old MS DNS TA had a mapping for sourcetype MSAD:NT6:DNS, as shown here: https://docs.splunk.com/Documentation/DCDNSAddOn/1.0.1/TA-WindowsDNS/Sourcetypes Now, as we all know, this TA is retired and absorbed into the main Windows TA... however, the Windows TA has no mappings at all for the Network Resolution data model, and its documentation shows that the sourcetype MSAD:NT6:DNS doesn't map to any data model. I get that there are other, better ways... but is there some reason we can't have the old DNS mappings in the Windows TA? https://docs.splunk.com/Documentation/AddOns/released/Windows/SourcetypesandCIMdatamodelinfo
I need to alert on a threshold. I would like to create an alert that looks at a source IP address and alerts me if that address attempts to connect to a threshold number of devices over port 445. So if Comp1 makes connections to more than 50 devices over port 445 within 5 minutes, alert me. Or something like that... the numbers are only for illustration. Thanks.
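A minimal sketch, assuming network traffic events with src_ip, dest_ip, and dest_port fields (the index and all field names are assumptions; substitute whatever your data actually uses):

index=network dest_port=445
| bin _time span=5m
| stats dc(dest_ip) AS distinct_devices by _time, src_ip
| where distinct_devices > 50

Saved as an alert that runs every 5 minutes over the last 5 minutes, this fires whenever one source reaches more than 50 distinct devices on port 445 within a window.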
Hello, I have done field extraction for a nested JSON event using a props.conf file. Everything is working as expected, but I am facing one issue with my requirements. The sample JSON event, my props.conf file, and the requirements/issue are given below. Any help will be greatly appreciated, thank you so much.

Sample nested JSON event:

{"TIME":"20220622154541","USERTYPE":"DSTEST","UID":"TEST01","FCODE":"06578","FTYPE":"01","SRCODE":"0A1","ID":"v23488d96-a1283-4ddf-8db7-8911-DS","IPADDR":"70.215.72.231","SYSTEM":"DS","EID":"ASW-CHECK","ETYPE":"VALID","RCODE":"001","DETAILINFO":{"Number":"03d1194292","DeptName":"DEALLE","PType":"TRI"},"YCODE":"1204342"}

props.conf:

[sourcetypename]
CHARSET = UTF-8
EVENT_BREAKER_ENABLE = TRUE
INDEXED_EXTRACTIONS = json
KV_MODE = json
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 30
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
TIME_FORMAT = %Y%m%d%H%M%S
TIME_PREFIX = {"TIME":"
TRUNCATE = 2000
category = Custom
disabled = false
pulldown_type = true

Issue/requirements: I am getting key/value pairs for the nested key DETAILINFO as
DETAILINFO.Number = 03d1194292
DETAILINFO.DeptName = DEALLE
DETAILINFO.PType = TRI
My requirement: the DETAILINFO key/value pair should show up like this after extraction:
DETAILINFO = "Number":"03d1194292","DeptName":"DEALLE","PType":"TRI"
OR
DETAILINFO = {"Number":"03d1194292","DeptName":"DEALLE","PType":"TRI"}
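A possible search-time sketch using the eval JSON functions available since Splunk 8.1 (this does not change the automatic extractions, it just adds the field shape you want):

... your search ...
| eval DETAILINFO=json_extract(_raw, "DETAILINFO")

json_extract returns the DETAILINFO object as a literal JSON string, i.e. {"Number":"03d1194292","DeptName":"DEALLE","PType":"TRI"}. Separately, note that having both INDEXED_EXTRACTIONS=json and KV_MODE=json enabled can produce duplicated field values; usually one of the two is enough.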
Hello all, I need to preface this with the disclaimer that I am a relative Splunk neophyte, so if you can / do choose to help, do not hesitate to keep it as knuckle-dragging / mouth-breather proof as possible.....

Issue: An individual machine with a UF instance appears to have only sent security logs from around Apr 2022 onward for ingestion, despite:
(a) the Splunk instance on this local machine having been up and running since 2019
(b) the Splunk ES architecture having been in place and running since 2016 - but none of those who implemented it remain, nor is there any usable documentation on exactly how / why certain configuration choices were made

To comply with data retention requirements we need to ensure that all previous local security logs from 2019 until now are ingested, confirmed to be stored, and then ideally deleted from the local machine to save storage space.
(a) The logs which seem to not have been ingested have been identified and moved to a location separate from the current security log.

Question: What is the most efficient and accurate way of ensuring these logs are actually ingested in a distributed environment? When looking through the documentation / various community threads and the data ingestion options (on our deployment server, license master, various search heads, heavy forwarders, indexers, etc.), I can't find anything that deals specifically with the situation I seem to be facing (existing deployment, select file ingestion from a specific instance) apart from physically going to the machine, which can be..... difficult. Any help / information / redirection would be greatly appreciated.
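A sketch of one common approach, assuming the archived logs were exported as .evtx files into a known folder (the path and index below are placeholders). Deploy an inputs.conf stanza like this to the UF through the deployment server; note that Splunk can only index exported .evt/.evtx files when the monitoring instance itself runs on Windows:

[monitor://D:\ArchivedSecurityLogs\*.evtx]
disabled = false
index = wineventlog

Once a search over that index and time range confirms the events landed, the archived files can be deleted locally.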
Inside the cloud trial I'm trying to install:
Splunk Add-on for Cisco WSA
Splunk Add-on for Linux
It opens a pop-up with: "Enter your Splunk.com username and password to download the app."
Entering my credentials returns: "Incorrect username or password"
I tried to add a new user (with the app role) with the same result. Has anybody encountered this?
Hi, we recently upgraded our Splunk ITSI instance, and the way you choose the font size for text on glass tables has changed. This seems simple but I can't figure it out. Looking at the documentation under "Add text", the text button referenced is not there at all. However, there is a button for "add markdown text" which does add text, but I cannot change the font size. Referencing the markdown documentation (expand the source options), this is what the code looks like:

{
    "type": "splunk.markdown",
    "options": {
        "markdown": "Health Score",
        "fontSize": "large"
    },
    "context": {},
    "showProgressBar": false,
    "showLastUpdated": false
}

However, this has no effect on the font size. Any help is appreciated.
I'm trying to make a chart that shows me how long each individual is logged in, including weekends. This is for a closed system that only has a handful of users. I'm using this search to get the data, but I'm having a very difficult time getting it to chart out in a usable way.

source="wineventlog:security" action=success (EventCode=4624 OR EventCode=4634 OR EventCode=4779 OR EventCode=4800 OR EventCode=4801 OR EventCode=4802 OR EventCode=4803 OR EventCode=4804) user!="anonymous logon" user!="DWM-*" user!="UMFD-*" user!=SYSTEM user!=*$ (Logon_Type=2 OR Logon_Type=7 OR Logon_Type=10)
| convert timeformat="%a %B %d %Y" ctime(_time) AS Date
| streamstats earliest(_time) AS login, latest(_time) AS logout by Date, host
| eval session_duration=logout-login
| eval h=floor(session_duration/3600)
| eval m=floor((session_duration-(h*3600))/60)
| eval SessionDuration=h."h ".m."m "
| convert timeformat=" %m/%d/%y - %I:%M %P" ctime(login) AS login
| convert timeformat=" %m/%d/%y - %I:%M %P" ctime(logout) AS logout
| stats count AS auth_event_count, earliest(login) AS login, max(SessionDuration) AS session_duration, latest(logout) AS logout, values(Logon_Type) AS logon_types by Date, host, user
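A simpler sketch that may chart more cleanly, assuming one session per user per day (a simplification; interleaved logons and logoffs need real pairing logic):

source="wineventlog:security" (EventCode=4624 OR EventCode=4634) (Logon_Type=2 OR Logon_Type=7 OR Logon_Type=10) user!="anonymous logon" user!=*$
| bin _time span=1d
| stats earliest(_time) AS first_seen latest(_time) AS last_seen by _time, user
| eval session_hours=round((last_seen-first_seen)/3600, 2)
| xyseries _time user session_hours

xyseries pivots the result into one column per user, which drops straight into a column or line chart.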
Hi, I have created a customized Splunk table in JavaScript using TableView and SearchManager. How do I refresh the table on a button click in JavaScript?
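A minimal SplunkJS sketch, assuming the SearchManager has id "mysearch" and the dashboard has a button with id "refreshBtn" (both ids are assumptions):

require(["jquery", "splunkjs/mvc", "splunkjs/mvc/simplexml/ready!"], function($, mvc) {
    // Look up the existing SearchManager by its id
    var search = mvc.Components.get("mysearch");
    $("#refreshBtn").on("click", function() {
        // Re-dispatch the search; the TableView bound to it re-renders when new results arrive
        search.startSearch();
    });
});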
Good afternoon. I am uncertain where to post this kind of question. We are currently using Splunk IT Essentials Work to monitor some Windows servers in our environment. Under the Infrastructure Overview tab there is an option to activate hiding empty entity types. By default it is off; is there a way to change this to on by default? See the attached screenshot for clarification. Thank you.
I've enabled access logging on an S3 bucket so I can have a record of when files are POSTed to the bucket. In addition, I've told the Splunk Add-on for AWS to look for new access log records in the bucket every 60 seconds. The problem is that I don't see these access logs in Splunk until hours (up to 3 or 4) after the records exist in the log on S3. If the add-on checks every minute, why am I not getting results sooner? I don't think it's a timezone issue, because if I am reading the configuration details correctly, it should just check for new records and not worry about timezones.
Hi, I decided to spin up my Splunk home environment again, and this time I'm running into an issue while installing UF 9.0 on my Raspberry Pi. It's a Pi 4 B running Ubuntu 22.04.1 LTS on aarch64 architecture. I followed the install instructions from Splunk's page on installing a UNIX forwarder, using the bundle "splunkforwarder-9.0.0-6818ac46f2ec-Linux-armv8.tgz". After getting some normal permissions things out of the way, I started the forwarder, and this time it's giving me the error:

Invalid key in stanza [webhook] in /opt/splunkforwarder/etc/system/default/alert_actions.conf, line 229: enable_allowlist (value: false).
Your indexes and inputs configurations are not internally consistent. For more information, run 'splunk btool check --debug'

So after running splunk btool check --debug and grepping for 'No spec' and 'Invalid' (these are all the error types btool reported), it returns the following after a clean install:

No spec file for: /opt/splunkforwarder/etc/apps/SplunkUniversalForwarder/default/app.conf
No spec file for: /opt/splunkforwarder/etc/apps/introspection_generator_addon/default/app.conf
No spec file for: /opt/splunkforwarder/etc/apps/search/default/app.conf
No spec file for: /opt/splunkforwarder/etc/apps/splunk_internal_metrics/default/app.conf
No spec file for: /opt/splunkforwarder/etc/manager-apps/_cluster/default/indexes.conf
No spec file for: /opt/splunkforwarder/etc/system/default/app.conf
No spec file for: /opt/splunkforwarder/etc/system/default/conf.conf
No spec file for: /opt/splunkforwarder/etc/system/default/federated.conf
No spec file for: /opt/splunkforwarder/etc/system/default/telemetry.conf
Invalid key in stanza [webhook] in /opt/splunkforwarder/etc/system/default/alert_actions.conf, line 229: enable_allowlist (value: false).

I cannot really find answers on this topic; most are related to other apps that people installed, but I only installed the universal forwarder, nothing else. I am also not sure what the answer is to the invalid key in the alert_actions.conf stanza and would like to know if there is a fix. I also found the following warnings, and read online that they don't impact the functionality of Splunk, but is there a way to suppress them, and how can I be sure they're not an issue?

Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunk /opt/splunkforward

My /opt/ permissions:

splunk@hostname:/opt/splunkforwarder$ ls -lia /opt
148855 drwxr-xr-x 10 splunk splunk 4096 Aug 12 15:47 splunkforwarder

Any help would be appreciated on this. I am trying to get the cleanest start possible, because on my last run I had a problem with the way my data was being ingested (the 'sourcetype too small' problem) and I wasn't able to fix it back then. Kind regards
Hi, this is my first time starting a discussion, so please pardon my mistakes. I am trying to perform a search where I can sort based on a series of numbers occurring at the end of a text. Example:

index=abc sourcetype=xyz Entity=HI* Text="*Rejected message received - code 456"
index=abc sourcetype=xyz Entity=HI* Text="*Rejected message received - code 789"
index=abc sourcetype=xyz Entity=HI* Text="*Rejected message received - code 345"

I would like to sort the count by the 3-digit code number. Is it possible to do this?
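A minimal sketch, assuming the code is always the final three digits of the Text field:

index=abc sourcetype=xyz Entity=HI* Text="*Rejected message received - code *"
| rex field=Text "code (?<code>\d{3})$"
| stats count by code
| sort code

Use sort - count instead of sort code if you want the rows ordered by frequency rather than by the code value.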
How can I solve this: create a new field called "StartTime" and set its value to seven days ago from today, snapped to the beginning of the day?
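A minimal sketch using relative_time, which takes the same time modifiers as the time range picker:

| makeresults
| eval StartTime=relative_time(now(), "-7d@d")

-7d steps back seven days and @d snaps to midnight. The result is epoch time; wrap it in strftime(StartTime, "%F") or similar if you need a readable string.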
How do I get SNMPv3 data from another tool (HP tools) into Splunk? The HP tool has a configuration option to forward traps in SNMPv3 to a Splunk HF, but it requires certain credentials to be configured at the HF end. Please let me know what configuration has to be done at the HF end for Splunk to receive SNMPv3 trap data.
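Splunk has no native SNMP trap input, so one common pattern (a sketch, not the only option) is to run net-snmp's snmptrapd on the HF host, give it the SNMPv3 credentials, log traps to a file, and have Splunk monitor that file. Every user name, passphrase, engine ID, and path below is a placeholder that must match what the HP tool actually sends:

# /etc/snmp/snmptrapd.conf
createUser -e 0x8000000001020304 hpuser SHA "authpassphrase" AES "privpassphrase"
authUser log hpuser
# run with: snmptrapd -Lf /var/log/snmptrapd.log

# inputs.conf on the HF
[monitor:///var/log/snmptrapd.log]
sourcetype = snmp_trap

The auth/priv protocols (SHA/AES here) also have to match the sender's SNMPv3 settings.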
I have the below Splunk search, which gets me all entityIds with a count:

index=coreprod pod=xxxx CASE(xxxxxx) event=ack
| stats count by entityId
| where count>1

I want to list ONLY those entityIds where the difference between their occurrences is less than 1 hour (or xx minutes).
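A sketch using streamstats to measure the gap between consecutive events per entityId (the 3600-second threshold is illustrative; swap in your xx minutes):

index=coreprod pod=xxxx CASE(xxxxxx) event=ack
| sort 0 entityId, _time
| streamstats window=2 range(_time) AS gap by entityId
| where gap>0 AND gap<3600
| stats count AS close_pairs by entityId

With window=2, range(_time) is the time between each event and the previous one for the same entityId; the first event of each entity has gap=0 and is filtered out.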
Hi all, we collect Fortinet FortiGate logs in Splunk. However, the incoming logs are in CEF format, which does not match the add-on, and there is a prefix "FTNTFGT" at the beginning of the fields. I am sharing a sample log below. Do we need to make a config change on the FortiGate?

<189>Aug 12 13:35:50 xxxx CEF:0|Fortinet|Fortigate|vxxx|00xxx|traffic:forward accept|3|deviceExternalId=xxxIxxxx FTNTFGTeventtime=1660300550574125940 FTNTFGTtz=+0300 FTNTFGTlogid=xxx cat=traffic:forward FTNTFGTsubtype=forward FTNTFGTlevel=notice FTNTFGTvd=xxx src=xxx spt=57425 deviceInboundInterface=xxx FTNTFGTsrcintfrole=lan dst=xxx dpt=18 deviceOutboundInterface=xxx FTNTFGTdstintfrole=wan FTNTFGTsrccountry=xxx FTNTFGTdstcountry=xxx externalId=xxx proto=6 act=accept FTNTFGTpolicyid=xxx FTNTFGTpolicytype=policy FTNTFGTpoluuid=xxxxxxx FTNTFGTpolicyname=xxxx duser=xxxxx FTNTFGTgroup=xxxx FTNTFGTauthserver=xxx app=HTTPS FTNTFGTtrandisp=xxx sourceTranslatedAddress=xxx sourceTranslatedPort=xxxx FTNTFGTappid=xxx FTNTFGTapp=xxxx FTNTFGTappcat=xxxx FTNTFGTapprisk=elevated FTNTFGTapplist=xxx FTNTFGTduration=xxx out=4348 in=2983 FTNTFGTsentpkt=38 FTNTFGTrcvdpkt=xx FTNTFGTsentdelta=123 FTNTFGTrcvddelta=104 FTNTFGTdevtype=Router FTNTFGTmastersrcmac=xxxxx FTNTFGTsrcmac=xxxx FTNTFGTsrcserver=0 @jerryzhao
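If the goal is to use the Splunk Add-on for Fortinet FortiGate, note that it is built for FortiGate's default key=value syslog format, not CEF. A hedged sketch of the FortiOS CLI change (verify the exact syntax against your FortiOS version's docs before applying):

config log syslogd setting
    set format default
end

After switching the format, events should arrive as plain key=value pairs without the FTNTFGT CEF prefixes, which is what the add-on's extractions expect.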
Hello, I need to get the logs from an external app into my Splunk Cloud instance. Where can I get the agent that I need to install on the Linux app server? And where is the path where I can find these logs? The logs should be in JSON format. Thanks a lot, have a good day.
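A sketch of the usual pattern: install a universal forwarder (downloadable from splunk.com) on the Linux app server, install your stack's universal forwarder credentials package on it so it can send to Splunk Cloud, and monitor the application's log directory. The path below is an assumption; check where your application actually writes its JSON logs (often under /var/log or the app's own directory):

[monitor:///var/log/myapp/*.json]
sourcetype = _json
index = main

The built-in _json sourcetype extracts the JSON fields automatically.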
Hi, I have the following bar chart (screenshot attached). The query for this bar chart is:

| inputlookup Migration-Status-All.csv
| search Vendor="McAfee"
| stats count by "Migration Comments"
| eventstats sum(count) as Total
| eval perc=round(count*100/Total,2)
| eval dummy='Migration Comments'
| chart sum(perc) over "Migration Comments" by dummy

I need the "In Progress" bar to be yellow and the "Not Started" bar to be red. I tried using an eval and case but it didn't work for me. How can this be done? Many thanks!
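One sketch, assuming this is a Simple XML dashboard panel: since the dummy trick turns each "Migration Comments" value into its own series, you can color the series with the chart's charting.fieldColors option:

<option name="charting.fieldColors">{"In Progress": 0xFFFF00, "Not Started": 0xFF0000}</option>

The values are hex RGB colors; any series not listed keeps its default palette color.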