All Topics

I have a single-instance Splunk Enterprise deployment running on Linux, with data feeding into my indexer from a number of Universal Forwarders on the network. My indexer is both indexing this data and forwarding it on to a Heavy Forwarder on my network, which then forwards my log data off to a third-party system. This has all been working well.

I am now attempting to configure my Heavy Forwarder so that it forwards its _internal logs back to my indexer, but I can't get it working. To do this, I created an app on the Heavy Forwarder at /opt/splunk/etc/apps/forward_internal_back2_Indexer. Inside this app I placed the following files:
_____________________________________
default/inputs.conf

    [monitor://$SPLUNK_HOME/var/log/splunk/splunkd.log]
    disabled = 0
    sourcetype = splunkd
    index = _internal

    [monitor://$SPLUNK_HOME/var/log/splunk/metrics.log]
    disabled = 0
    sourcetype = splunkd
    index = _internal
_____________________________________
default/props.conf

    [splunkd]
    TRANSFORMS-routing = routeBack2Indexer
_____________________________________
default/transforms.conf

    [routeBack2Indexer]
    REGEX = (.)
    DEST_KEY = _TCP_ROUTING
    FORMAT = HF_internallogs_to_indexer
_____________________________________
default/outputs.conf

    [tcpout:HF_internallogs_to_indexer]
    server = <ip_address_of_splunk_indexer>:9997
_____________________________________
Once I had done this I restarted splunkd on the Heavy Forwarder. However, I can't see _internal logs coming back from my Heavy Forwarder host. I would appreciate some help figuring out where I've gone wrong.
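One common simplification, offered here as a hedged sketch rather than a confirmed fix: for file monitor inputs you can often skip the props/transforms routing entirely and set _TCP_ROUTING directly on the monitor stanzas. The group name HF_internallogs_to_indexer is taken from the post above; the rest follows standard inputs.conf syntax:

    # default/inputs.conf - route these inputs straight to the tcpout group
    [monitor://$SPLUNK_HOME/var/log/splunk/splunkd.log]
    disabled = 0
    sourcetype = splunkd
    index = _internal
    _TCP_ROUTING = HF_internallogs_to_indexer

Also worth checking: outputs.conf has forwardedindex filter settings that control whether internal indexes are forwarded at all; _internal is included by default, but a more restrictive [tcpout] stanza elsewhere on the host can override that.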
The default/props.conf for v1.0.0 of this add-on has a typo.  In the line that starts with "FIELDALIAS-firewall_pkts_in_out", the destination field is currently written as "packtes_in" - it (presumably) should be "packets_in".  
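For clarity, a hedged sketch of the corrected line: the source field names are assumptions, since only the destination alias is quoted in the report above:

    FIELDALIAS-firewall_pkts_in_out = pkts_in AS packets_in pkts_out AS packets_out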
Hi, I have several web servers that run on the same host (on different ports) or on different hosts. The best way to confirm they are working is to use a curl command like the one below:

    curl -s http://192.168.1.1:8000 | grep login

Now the question is: is there any way to monitor these services in Splunk without pain? Is there an add-on (like nmon)? Or should I write a script to create a log file like the one below and then index it in Splunk?

    TIMESTAMP service 1, up
    TIMESTAMP service 2, down

Any ideas? Thanks
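If the scripted route is chosen, a minimal sketch might look like the following. The URLs, the "login" marker string, and the output format are assumptions modelled on the post above; Splunk scripted inputs simply index whatever the script prints to stdout:

    #!/usr/bin/env python3
    # Check a list of web services and print one status line per service.
    # Run as a Splunk scripted input so each printed line becomes an event.
    import time
    import urllib.request

    SERVICES = {
        "service 1": "http://192.168.1.1:8000",   # assumed endpoints
        "service 2": "http://192.168.1.2:8000",
    }

    for name, url in SERVICES.items():
        try:
            body = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "replace")
            status = "up" if "login" in body else "down"
        except Exception:
            status = "down"
        # "TIMESTAMP service, status" - matches the format sketched in the question
        print("%s %s, %s" % (time.strftime("%Y-%m-%d %H:%M:%S"), name, status))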
Hello, I've been working with the add-on Python code option for some time now and I find it very useful and easy when it comes to sending events to Splunk (using the ew.write_event() function). Are there other functions provided by Splunk to create dashboards and panels (such as <object>.create_dashboard()) that I could use, besides the REST API?
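To my knowledge the Python SDK has no create_dashboard() helper; dashboards live behind the data/ui/views REST endpoint, which the SDK can still reach through its generic post() method. A hedged sketch, with the connection details and dashboard XML as placeholders:

    import splunklib.client as client

    # Connection details are placeholders
    service = client.connect(host="localhost", port=8089,
                             username="admin", password="changeme")

    # Minimal Simple XML for a one-panel dashboard
    xml = "<dashboard><label>My Dashboard</label></dashboard>"

    # data/ui/views is the REST endpoint behind Splunk dashboards;
    # the SDK exposes it via the generic post() call
    service.post("data/ui/views", name="my_dashboard", **{"eai:data": xml})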
I created a custom Python API script and it works fine, and I want to import it into Splunk, so I put my script at "C:\Program Files\Splunk\etc\apps\search\bin\sample.py". When I run it from cmd the result comes out correctly. In Splunk I created Data inputs -> Scripts -> selected my script -> selected source type _json -> app context App Browser -> selected an index, but I am not getting any JSON results in the Splunk search index. Is there any configuration needed? When I check inputs.conf the file details are already correct, so why doesn't the Splunk index show any JSON data?

    [script://$SPLUNK_HOME\etc\apps\search\bin\sample.py]
    disabled = false
    host = home
    index = jsearch
    interval = 60.0
    sourcetype = _json
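One thing worth verifying, sketched here as an assumption about what sample.py does: scripted inputs only index what the script writes to stdout when Splunk itself invokes it, so a script that writes to a file, or only prints when run interactively, will produce nothing. A minimal stdout-printing version:

    # sample.py - a minimal sketch; the payload is a placeholder
    import json
    import sys

    event = {"status": "ok", "count": 42}  # stand-in for the real API result
    sys.stdout.write(json.dumps(event) + "\n")  # Splunk indexes stdout lines

Also confirm that the jsearch index actually exists and that your search explicitly includes index=jsearch, since these events will not appear in the default index.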
Hey everyone! We're currently in the process of getting ready to deploy a Splunk Cloud instance to migrate our local on-prem deployment to. Currently, our environment is a hodge-podge of installs, including completely unmanaged universal forwarders, a couple of heavy forwarder clusters, and so on. We also have resources both in our local datacenter and in various cloud providers.

I've thought for a while that we should put the deployment servers into a container environment, and I was curious whether anyone has experience with doing this. Here's the design I want to build towards (a rough compose sketch follows the list):

- Running at least two instances of Splunk Enterprise, so that we have redundancy and load balancing and can transparently upgrade
- The instances would not have any indexer or search head functionality, per Splunk's best practices
- Ideally, the instances would not have any web interfaces, because everything would be code-managed
- All the instances would be configured to talk up to the Splunk Cloud environment as part of their initial deploy
- All of the instances would use a shared storage location for their apps, including self-configuration for anything beyond the initial setup; this shared storage location would be git-controlled
- In an ideal world, the individual Splunk components would not care which deployment server they talked to - they would just check in to a load-balanced URI

Now, I know this is massively over-engineering the solution. We've got a couple thousand potential endpoints to manage, so a single standalone deployment server would do the trick. But I want to try this route for two reasons. First, I think it will scale better, especially if I get it agnostic enough that we can use it to deploy to AWS or Azure and get cloud-local deployment servers. Second, and perhaps more importantly, I want to practice and stretch my skills with containers. I've already worked with our cloud team to build out a Splunk Connect for Kubernetes setup in order to monitor our pods and OpenShift environment. I want to take this opportunity to learn.
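A rough sketch of what one containerized deployment server could look like with the public splunk/splunk image, offered as a starting point under stated assumptions rather than a vetted design. The SPLUNK_ROLE value comes from the role names the image's splunk-ansible provisioning supports, and the bind mount stands in for the git-controlled shared app storage:

    # docker-compose.yml - hedged sketch, values are placeholders
    version: "3.8"
    services:
      deployment-server:
        image: splunk/splunk:latest
        environment:
          SPLUNK_START_ARGS: "--accept-license"
          SPLUNK_PASSWORD: "change-me"               # placeholder
          SPLUNK_ROLE: "splunk_deployment_server"
        ports:
          - "8089:8089"                              # mgmt port the clients phone home to
        volumes:
          - ./deployment-apps:/opt/splunk/etc/deployment-apps  # git-controlled share

Two of these behind a load balancer gets close to the check-in-anywhere goal, with the caveat that deployment servers don't natively share client state, so truly interchangeable instances generally need identical serverclass.conf and app content on both.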
Hello to all friends. Because I have very large data, I changed the value of maxresultrows, but when I use the dbxquery command I still get the following error. Is there a solution?
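In case it helps frame the question: the setting usually involved is maxresultrows under the [searchresults] stanza of limits.conf, and DB Connect additionally caps rows through its own fetch settings, so raising only one of them may not be enough. A hedged sketch of the limits.conf side, with an arbitrary value:

    # $SPLUNK_HOME/etc/system/local/limits.conf
    [searchresults]
    maxresultrows = 500000   # placeholder value; the default is 50000

A restart is needed for limits.conf changes to take effect, and the actual error text would help pin down which limit is being hit.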
Hi there, I am new to Splunk and struggling to join two searches based on a condition, e.g. left join on field1 from index2 if field1 != " ", otherwise left join on field2 from index2. field2 is only present in index2, and field1 is common to both. I have two SPL searches giving the right result when executed separately; I don't know how to merge them based on the above condition to get the complete result. Thanks
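One pattern that often works better than join for this, sketched with assumed index and field names taken from the description: compute a single join key with a conditional eval, then merge the two datasets with stats:

    (index=index1) OR (index=index2)
    | eval joinkey=if(isnotnull(field1) AND field1!=" ", field1, field2)
    | stats values(*) AS * by joinkey

The if() picks field1 when it is populated and falls back to field2, which emulates the conditional left join; the exact condition and the stats aggregation would need adjusting to the real data.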
I have a Windows ESXi server and installed Splunk on it, along with the "Splunk Add-on for Windows". I created a local folder in the Splunk app and used inputs.conf; WinEventLog is enabled and the local event log is received in Splunk. This server also has CyberArk installed, and clients access the server via Remote Desktop Connection.

Question: does my local Splunk instance get an event log with the login details when someone accesses the server over Remote Desktop Connection? My event log currently only shows local events, with no src_ip, port, or IP address details - everything is empty. Maybe because Splunk runs locally and collects the local event log, it doesn't show any IP address, port, or src_ip fields in the events?

If someone accesses my machine from Remote Desktop Connection, I need the resulting event log to include the IP address details. Do I need to change anything in inputs.conf to receive the IP address information correctly? Should I create a stanza in inputs.conf to receive the login event log in Splunk, like one of these examples?

    [WinEventLog:Microsoft-Windows-TerminalServices-RemoteConnectionManager/Operational]
    disabled = 0
    index = wineventlog
    start_from = oldest
    current_only = 0
    checkpointInterval = 5
    renderXML = false

or

    [WinEventLog:Microsoft-Windows-TerminalServices-LocalSessionManager/Operational]
    disabled = 0
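For what it's worth, a hedged search sketch: remote interactive logons land in the Security log as EventCode 4624 with Logon_Type 10, and that event carries the connecting client's address, so the stanzas above are not strictly required just to get the source IP. Field names assume the Splunk Add-on for Windows extractions:

    index=wineventlog sourcetype=WinEventLog EventCode=4624 Logon_Type=10
    | table _time, user, Source_Network_Address, host

Source_Network_Address is the field the add-on extracts for the client IP on 4624 events; purely local (console) logons legitimately have no source address, which would explain the empty fields seen so far.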
I have a key:value for DB names but need only the first part. For example, currently:

    DBNAME : db001_inst1:schemanamexyx

or

    DBNAME : db01_inst1:schemanamexyx

Requested: a rex statement to provide only the value in front of the colon, i.e. db001_inst1 or db01_inst1.
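A minimal sketch, assuming the extracted field is named DBNAME: capture everything up to the first colon:

    ... | rex field=DBNAME "^(?<db_instance>[^:]+):"

This yields db_instance=db001_inst1 or db_instance=db01_inst1. Alternatively, | eval db_instance=mvindex(split(DBNAME, ":"), 0) does the same without a regex.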
The special characters in my search results are converted to HTML entity names in the output, like &quot; and &lt;. Under what conditions does this conversion happen? I want the results of numbers 2, 3, and 4. My version of Splunk is 8.2.6.

1. search query:

    | makeresults | eval text="@@@javascript&colon;" | eval text=replace(text, "@@@", "\"") | table text

result: &quot;javascript&colon;

2. search query:

    | makeresults | eval text="@@@javascript" | eval text=replace(text, "@@@", "\"") | table text

result: "javascript

3. search query:

    | makeresults | eval text="@@@javascripta:" | eval text=replace(text, "@@@", "\"") | table text

result: "javascripta:

4. search query:

    | makeresults | eval text="@@@javascripa:" | eval text=replace(text, "@@@", "\"") | table text

result: "javascripa:

5. search query:

    | makeresults | eval text="@@@javascript&colon;" | eval text=replace(text, "@@@", "<") | table text

result: &lt;javascript&colon;
Hi all -

The old MS DNS TA had a mapping for sourcetype MSAD:NT6:DNS, as shown here: https://docs.splunk.com/Documentation/DCDNSAddOn/1.0.1/TA-WindowsDNS/Sourcetypes

Now, as we all know, this TA is retired and absorbed into the main Windows TA... however, the Windows TA has no mappings at all for the Network Resolution data model, and shows that the sourcetype MSAD:NT6:DNS doesn't map to any data model. I get that there are other, better ways... but is there some reason we can't have the old DNS mappings in the Windows TA? https://docs.splunk.com/Documentation/AddOns/released/Windows/SourcetypesandCIMdatamodelinfo
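Until the mappings return, one stopgap is to recreate the old tagging in a small local add-on. A hedged sketch, since the eventtype name here is arbitrary:

    # eventtypes.conf
    [msad_dns]
    search = sourcetype=MSAD:NT6:DNS

    # tags.conf
    [eventtype=msad_dns]
    network = enabled
    resolution = enabled
    dns = enabled

The Network Resolution data model picks up events tagged network, resolution, and dns, so this restores the routing the old TA provided, though any field extractions or aliases the old TA shipped would still need to be carried over for full CIM compliance.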
I need to alert on a threshold. I would like to create an alert that looks at a source IP address and alerts me if that address attempts to connect to a threshold number of devices over port 445. So if Comp1 makes connections to more than 50 devices over port 445 within 5 minutes, alert me. Or something like that... the numbers are only for illustration.

Thanks.
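A hedged sketch of the underlying search: the index and field names (src_ip, dest_ip, dest_port) are assumptions that depend on the data source feeding the network traffic events:

    index=network_traffic dest_port=445
    | stats dc(dest_ip) AS device_count by src_ip
    | where device_count > 50

Saved as an alert running every 5 minutes over a 5-minute time range, this fires whenever any single source touches more than 50 distinct destinations on port 445 within the window.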
Hello, I have done field extraction for a nested JSON event using a props.conf file. Everything is working as expected, but I am facing one issue with my requirements. The sample JSON event, my props.conf file, and the requirements/issue are given below. Any help will be greatly appreciated, thank you so much.

Sample nested JSON event:

    {"TIME":"20220622154541","USERTYPE":"DSTEST","UID":"TEST01","FCODE":"06578","FTYPE":"01","SRCODE":"0A1","ID":"v23488d96-a1283-4ddf-8db7-8911-DS","IPADDR":"70.215.72.231","SYSTEM":"DS","EID":"ASW-CHECK","ETYPE":"VALID","RCODE":"001","DETAILINFO":{"Number":"03d1194292","DeptName":"DEALLE","PType":"TRI"},"YCODE":"1204342"}

props.conf:

    [sourcetypename]
    CHARSET=UTF-8
    EVENT_BREAKER_ENABLE=TRUE
    INDEXED_EXTRACTIONS=json
    KV_MODE=json
    LINE_BREAKER=([\r\n]+)
    MAX_TIMESTAMP_LOOKAHEAD=30
    NO_BINARY_CHECK=true
    SHOULD_LINEMERGE=true
    TIME_FORMAT=%Y%m%d%H%M%S
    TIME_PREFIX={"TIME":"
    TRUNCATE=2000
    category=Custom
    disabled=false
    pulldown_type=true

Issue/requirements: I am getting key/value pairs for the nested key DETAILINFO as:

    DETAILINFO.Number = 03d1194292
    DETAILINFO.DeptName = DEALLE
    DETAILINFO.PType = TRI

My requirement: the DETAILINFO key/value pair should show up like one of these after extraction:

    DETAILINFO = "Number":"03d1194292","DeptName":"DEALLE","PType":"TRI"

or

    DETAILINFO = {"Number":"03d1194292","DeptName":"DEALLE","PType":"TRI"}
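A hedged search-time sketch that produces the second desired form, assuming Splunk 8.1 or later where the json_extract eval function is available:

    ... | eval DETAILINFO=json_extract(_raw, "DETAILINFO")

This returns the DETAILINFO object as a literal JSON string, e.g. {"Number":"03d1194292","DeptName":"DEALLE","PType":"TRI"}. On older versions, | spath output=DETAILINFO path=DETAILINFO achieves much the same thing; neither approach changes what is stored at index time.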
Hello all, I need to preface this with the disclaimer that I am a relative Splunk neophyte, so if you can / do choose to help, do not hesitate to keep it as knuckle-dragging / mouth-breather proof as possible.

Issue: an individual machine with a UF instance appears to have only sent security logs from around Apr 2022 onwards to be ingested, despite the fact that:

(a) the Splunk instance on this local machine has been up and running since 2019
(b) the Splunk ES architecture has been in place and running since 2016, but none of those who implemented it remain, nor is there any usable documentation on exactly how/why certain configuration choices were made

To comply with data retention requirements, we need to ensure that all previous local security logs from 2019 until now are ingested, confirmed to be stored, and then ideally deleted from the local machine to save storage space.

(a) the logs which seem to not have been ingested have been identified and moved to a separate location from the current security log

Question: what is the most efficient and accurate way of ensuring these logs are actually ingested in a distributed environment? Looking through the documentation, various community threads, and the data ingestion options (on our deployment server, license master, various search heads, heavy forwarders, indexers, etc.), I can't find anything that deals specifically with the situation I seem to be facing (existing deployment, select file ingestion from a specific instance), apart from physically going to the machine, which can be... difficult. Any help / information / redirection would be greatly appreciated.
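One approach that avoids visiting the machine, offered as a hedged sketch: since the UF already checks in to the deployment server, push it a small app whose inputs.conf monitors the folder the archived logs were moved to. The path and index below are placeholders, and it assumes the archived logs are exported .evt/.evtx files (Splunk ships a preprocessor for these, but only on Windows instances):

    # inputs.conf, deployed to the UF via the deployment server
    [monitor://D:\archived_security_logs\*.evtx]
    index = wineventlog
    disabled = 0

The events should index with their original timestamps, so an all-time search scoped to that host would confirm the backfill before anything is deleted locally.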
Inside the cloud trial I'm trying to install:

- Splunk Add-on for Cisco WSA
- Splunk Add-on for Linux

It opens a pop-up with: "Enter your Splunk.com username and password to download the app." Entering my credentials returns: "Incorrect username or password". I tried to add a new user (with the app role) with the same result. Has anybody encountered this?
Hi, we recently upgraded our Splunk ITSI instance and choosing the font size for text on glass tables has changed. This seems simple but I can't figure it out. Looking at the documentation under "Add text", the text button referenced is not there at all. However, there is a button for "Add markdown text" which does add text, but I cannot change the font size. Referencing the markdown language documentation (expand the source options), this is what the code looks like:

    {
        "type": "splunk.markdown",
        "options": {
            "markdown": "Health Score",
            "fontSize": "large"
        },
        "context": {},
        "showProgressBar": false,
        "showLastUpdated": false
    }

However, this has no effect on the font size. Any help is appreciated.
I'm trying to make a chart that shows me how long each individual is logged in, including weekends. This is for a closed system that only has a handful of users. I'm using this search to get the data, but I'm having a very difficult time getting it to chart out in a usable way:

    source="wineventlog:security" action=success Logon_Type=2
        (EventCode=4624 OR EventCode=4634 OR EventCode=4779 OR EventCode=4800 OR EventCode=4801 OR EventCode=4802 OR EventCode=4803 OR EventCode=4804)
        user!="anonymous logon" user!="DWM-*" user!="UMFD-*" user!=SYSTEM user!=*$
        (Logon_Type=2 OR Logon_Type=7 OR Logon_Type=10)
    | convert timeformat="%a %B %d %Y" ctime(_time) AS Date
    | streamstats earliest(_time) AS login, latest(_time) AS logout by Date, host
    | eval session_duration=logout-login
    | eval h=floor(session_duration/3600)
    | eval m=floor((session_duration-(h*3600))/60)
    | eval SessionDuration=h."h ".m."m "
    | convert timeformat=" %m/%d/%y - %I:%M %P" ctime(login) AS login
    | convert timeformat=" %m/%d/%y - %I:%M %P" ctime(logout) AS logout
    | stats count AS auth_event_count, earliest(login) AS login, max(SessionDuration) AS session_duration, latest(logout) AS logout, values(Logon_Type) AS logon_types by Date, host, user
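One alternative shape that often charts more cleanly, sketched under stated assumptions (that 4624/4634 pairs are reliable on this system and that a 12h maxspan is a sensible session cap):

    source="wineventlog:security" (EventCode=4624 OR EventCode=4634)
        user!="anonymous logon" user!=SYSTEM user!=*$
    | transaction user host startswith=(EventCode=4624) endswith=(EventCode=4634) maxspan=12h
    | eval session_hours=round(duration/3600, 2)
    | timechart span=1d sum(session_hours) AS hours_logged_in by user

transaction computes the duration field automatically, and keeping the duration numeric (rather than the "2h 13m" string built above) is what lets timechart actually plot it.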
Hi, I have created a customized Splunk table in JavaScript using TableView and SearchManager. How do I refresh the table on a button click in JavaScript?
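A minimal sketch, assuming the SearchManager was registered with the id "mySearch" and the button has the DOM id "refreshBtn" (both placeholders): re-dispatching the manager's search makes every view bound to it, including the TableView, re-render:

    require([
        'jquery',
        'splunkjs/mvc',
        'splunkjs/mvc/simplexml/ready!'
    ], function($, mvc) {
        var search = mvc.Components.get('mySearch');   // the SearchManager id is an assumption
        $('#refreshBtn').on('click', function() {
            search.startSearch();                      // re-run the search; the table refreshes
        });
    });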
Good afternoon. I am uncertain where to post this kind of question. We are currently using Splunk IT Essentials Work to monitor some Windows servers in our environment only. Under the Infrastructure Overview tab there is an option to activate hiding empty entity types. By default it is off; is there a way to change this to on by default? See the attached screenshot for clarification. Thank you