All Topics


Hey guys, has anyone ever run into this problem? I'm using Splunk Cloud and I'm trying to extract a new field using regex, but the data is in the source field. This is the regex I'm using:

| rex field=source "Snowflake\/(?<folder>[^\/]+)"

When I use it in a search it works perfectly, but the main goal is to save this as a permanent field. I know that field extractions draw from _raw by default; is there an option to direct Splunk Cloud to pull from the source field and save it as a permanent field?
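A minimal sketch of what the permanent (search-time) extraction could look like in props.conf, assuming a hypothetical sourcetype name of snowflake_logs; the "in source" suffix tells the EXTRACT to run against the source field instead of _raw:

[snowflake_logs]
EXTRACT-folder = Snowflake\/(?<folder>[^\/]+) in source

In Splunk Cloud, where props.conf cannot be edited directly, the same extraction should be possible under Settings > Fields > Field extractions by pasting the regex with the "in source" suffix included.
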
Hi, I would like to know how I can create a new filter by field, like "slack channel name" / Phantom artifact id. How is it created? Thanks!
Hello, can anyone assist in determining why my Splunk instance ingests large amounts of data ONLY on the weekends? As near as I can tell, this appears to be across the board for all hosts. I run this command:

index=_internal metrics kb series!=_* "group=per_host_thruput" earliest=-30d | eval mb = kb / 1024 | timechart fixedrange=t span=1d sum(mb) by series

and it shows the daily ingest for numerous forwarders. During the week it averages out, but over the weekend it exceeds my daily ingest limit, causing warnings. I would like to find out the cause and a possible solution so I can even out the ingestion and stop getting violations. Much appreciated for any assistance!
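To break the weekend spike down by data type rather than by host, a hedged variation on the same idea is to chart license usage by sourcetype from license_usage.log; the b and st field names below are the standard ones in that log, but this only works if your search can reach the license manager's _internal index:

index=_internal source=*license_usage.log type=Usage earliest=-30d
| eval mb = b / 1024 / 1024
| timechart span=1d sum(mb) by st useother=false

Comparing the per-sourcetype chart against the per-host one usually narrows a weekend-only spike down to a specific scheduled job, backup, or log-rotation pattern.
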
Hi All, our SVC calculation is in _introspection and our search name is in _internal and _audit. We need a common field to map them together so we can tie an SVC (and dollar amount) to a particular search. We tried doing it using the SID, but that is not matching. Can someone help me out here based on your experience?
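A hedged sketch of one way to line the two sources up on the search id; the _introspection field path (data.search_props.sid) and the single quotes wrapped around search_id in _audit are what I have typically seen, so verify both against your own events, and substitute your actual SVC calculation for the placeholder sum(data.pct_cpu):

index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.search_props.sid=*
| rename data.search_props.sid AS sid
| stats sum(data.pct_cpu) AS total_pct_cpu by sid
| join type=left sid
    [ search index=_audit action=search info=completed
      | eval sid=trim(search_id, "'")
      | table sid savedsearch_name user ]
| table sid savedsearch_name user total_pct_cpu

One common reason a raw SID match fails is exactly that quoting difference in _audit, so normalizing both sides before joining is usually the first thing to check.
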
Hi, I know that as part of SPL-212687 this issue was fixed in 8.2.7 and 9.0+, however we have had some hosts drop their Defender logs after receiving a Windows Defender update. These UFs are on version 9.0.2 but have still reported this issue. Is there any known problem that would cause this?
I want to call a lookup within a case statement. If possible, please share a sample query.
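As far as I know, lookup cannot be invoked from inside an eval case() expression itself; the usual pattern is to run the lookup first and then branch on its output field with case(). A minimal sketch, using a hypothetical lookup status_lookup.csv with fields code and description (the index, sourcetype, and field names are placeholders):

index=myindex sourcetype=mysourcetype
| lookup status_lookup.csv code OUTPUT description
| eval status_group = case(
    description == "error", "Failure",
    description == "warning", "Degraded",
    isnotnull(description), "OK",
    true(), "Unknown")

The point is only the ordering: lookup first, then case() on the field the lookup returned.
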
Need help with the extraction of an alphanumeric field, e.g.: ea37c31d-f4df-48ab-b0b7-276ade5c5312
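That value looks like a UUID/GUID, so a rex along these lines should extract it from _raw (the capture-group name guid is just an illustrative choice):

index=myindex
| rex field=_raw "(?<guid>[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12})"
| table _time guid
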
Hi, I have two searches joined using the join command. The first search needs to run with earliest=-60m, while the second search runs against a summary index and needs to fetch all of its results, so it needs to run over all time. How can this be done? I am setting earliest=-60m in my first search and choosing "All time" as the time range while scheduling the report containing these two searches, but it is not working. I have not used any time modifier in my summary index search.

Search to populate my summary index:
index=testapp sourcetype=test_appresourceowners earliest=-24h latest=now | table sys_id, dv_manager, dv_syncenabled, dv_resource, dv_recordactive | collect addtime=false index=summaryindex source=testapp

My scheduled report search:
index=index1 earliest=-60m | join host [| search index=summaryindex earliest="alltime"] | table host field1 field2
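For what it's worth, a hedged version of the scheduled search in which the outer search keeps its own 60-minute window and the subsearch explicitly overrides the report's time range (earliest=0 is the usual inline way to express "all time"; field1/field2 are just the placeholders from the post):

index=index1 earliest=-60m
| join host
    [ search index=summaryindex earliest=0 latest=now ]
| table host field1 field2

Time modifiers written inside the search string take precedence over the time range chosen when scheduling, which is why picking "All time" for the whole report does not give the two searches different windows.
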
Referring to the inputs.conf below for one of my Windows servers: as you can see, there is some whitespace at the end of the first line before the closing bracket. "folderA" is the folder whose contents Splunk should be ingesting but is not (there are multiple log files inside). Is it possible that Splunk is not ingesting the logs because of this whitespace? And if so, is there an explanation we can give to advise the client?

[monitor://C:\Program Files\somepath\folderA ]
index=someindex
sourcetype=somesourcetype
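For comparison, this is what the stanza would look like with the trailing space removed (index and sourcetype names kept from the post). My understanding is that the text inside the brackets is taken literally as the path to monitor, so a trailing space can end up being treated as part of the directory name and fail to match the real folder; it is worth ruling out by correcting the stanza and restarting the forwarder:

[monitor://C:\Program Files\somepath\folderA]
index = someindex
sourcetype = somesourcetype
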
I'm trying to migrate the KV store on a v8.2 installation on Windows, but it fails early in the process.

splunk migrate kvstore-storage-engine --target-engine wiredTiger
ERROR: Cannot get the size of the KVStore folder at=E:\Splunk\Indexes\kvstore\mongo, due to reason=3 errors occurred. Description for first 3: [{operation:"failed to stat file", error:"Access is denied.", file:"E:\Splunk\Indexes\kvstore\mongo"}, {operation:"failed to stat file", error:"Access is denied.", file:"E:\Splunk\Indexes\kvstore\mongo"}, {operation:"failed to stat file", error:"Access is denied.", file:"E:\Splunk\Indexes\kvstore\mongo"}]

I've tried file operations on the folder and subfolders of E:\Splunk\Indexes\kvstore\mongo and everything seems fine. The mongod.log does not contain any rows from the migration. Any nudges in the right direction? Can I upgrade to 9.1 without migrating the store?
<panel>
  <html>
    <div class="modal $tokShowModel$" id="myModal" style="border-top-left-radius:25px; border-top-right-radius:25px;">
      <div class="modal-header" style="background:#e1e6eb; padding:20px; height:10px;">
        <h3>Message:</h3>
      </div>
      <div class="modal-body" style="padding:30%">
        <p style="color:blue;font-size:16px;">
          This dashboard has been moved to azure. Kindly visit the following link -
          <a href="https://mse-svsplunkm01.emea.duerr.int:8000/en-US/app/GermanISMS/kpi_global_v5">Go here</a>
        </p>
      </div>
    </div>
  </html>
</panel>

I have created this code, which displays the pop-up, but the Splunk dashboard is still working in the background. Can anyone please help me with an idea for a script I can add to this code so that the dashboard stops working in the background while the pop-up is displayed?
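One idea that avoids a separate script is to gate the rest of the dashboard on the same token using the depends/rejects attributes of Simple XML, so the original rows only render when the modal token is unset. This is a hedged sketch rather than a tested fix, and whether the hidden rows' searches still get dispatched can vary, so verify it on your version:

<row rejects="$tokShowModel$">
  <panel>
    <!-- original dashboard panels go here; while the modal token is set, this row is not shown -->
  </panel>
</row>
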
Hello, I am unable to see data / tenant data in the prod dashboards when searching by tenant id. I cannot see the tenant id there, but it is visible in the lower domains. I have verified that all Beats metrics are installed on the servers.
Hello, I'm trying to find the average response time across all events using the field totalTimeTaken. The thing is, when I tested this regular expression on a regular-expression site, it showed that I'm extracting the field and value correctly, but when I put the same into the Splunk statement it is not yielding the expected result.

Log:
{"Record: {"ATimeTaken":0, "BTimeTaken":0 ,"totalTimeTaken":4},{anotherFields}}

Query:
| makeresults ns=project* | eval _raw="\"totalTimeTaken\":4" | rex field=_raw "\"totalTimeTaken\":+(?<Response_Time>\d+)" | stats avg(response_time)

Could I know where I'm going wrong?
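A hedged guess at what is going wrong, with a corrected version of the test query: SPL field names are case sensitive, so stats avg(response_time) never sees the field captured as Response_Time, and makeresults does not take an ns= argument. Keeping the rest of the post's logic:

| makeresults
| eval _raw="\"totalTimeTaken\":4"
| rex field=_raw "\"totalTimeTaken\":(?<Response_Time>\d+)"
| stats avg(Response_Time) AS avg_response_time
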
I tried to whitelist an IP address for HEC log ingestion and got the error message "Subnet overlaps Private IP block". Does anyone know what this means? Thanks.
I'm getting this error message in the log file: solnlib.credentials.CredentialNotExistException: Failed to get password of realm=. According to this page, https://splunk.github.io/addonfactory-solutions-library-python/credentials/#solnlib.credentials.CredentialNotExistException , this is due to the username not being valid. I'm trying to work out how to see what is passed to credentials.py, since the information in the username doesn't make sense to me. Is there any way of debugging credentials.py? I tried to put print statements in, but the TA UI didn't like it; I had to remove the print statements to get the UI working again. I've tried debugging via the command line but always get stuck at this point: session_key = sys.stdin.readline().strip(). I can't work out what I need to do to see where the user information is coming from. Any help on how I can debug this? TIA, Joe
I am trying to create a dashboard to examine Group Policy processing errors. I would like to create a drop-down based on the values returned for EventCode, which is the Windows Event ID.
1. How do I create a dynamic drop-down to show the Event IDs (EventCode) returned by the search?
2. I see you can enter a whole new search, but technically that is different from the main search, right? How do I base it on the main search?
3. What are Label (fieldForLabel) and Value (fieldForValue) for? Why are they required?

<form version="1.1" theme="light">
  <label>GP Errors</label>
  <fieldset submitButton="true" autoRun="false">
    <input type="time" token="field1">
      <label></label>
      <default>
        <earliest>-90m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="text" token="Computername">
      <label>Computer Name</label>
      <default>*</default>
    </input>
    <input type="dropdown" token="EventID">
      <label>Event ID</label>
      <choice value="*">All</choice>
      <default>*</default>
      <initialValue>*</initialValue>
      <fieldForLabel>EventID</fieldForLabel>
      <fieldForValue>EventID</fieldForValue>
      <search>
        <query>index=winevent source="WinEventLog:System" SourceName="Microsoft-Windows-GroupPolicy" Type=Error | stats values(EventCode)</query>
        <earliest>-90m@m</earliest>
        <latest>now</latest>
      </search>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=winevent source="WinEventLog:System" SourceName="Microsoft-Windows-GroupPolicy" Type=Error host=$Computername$ EventCode=$EventID$ | table host, EventCode, Message, _time | rename host AS Host, EventCode AS EventID | sort _time desc</query>
          <earliest>-90m@m</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>
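A hedged sketch of how the drop-down input is typically populated dynamically. The populating search should return one row per value; stats values(EventCode) collapses everything into a single multivalue cell under a column literally named "values(EventCode)", which never matches the configured EventID field. fieldForLabel is what the user sees in the list, fieldForValue is what gets written into the $EventID$ token, and they are required because the input has to know which columns of the populating search to read. The populating search is indeed a separate search from the main panel, but tying its time range to the same $field1$ time token keeps the two roughly in step:

<input type="dropdown" token="EventID">
  <label>Event ID</label>
  <choice value="*">All</choice>
  <default>*</default>
  <fieldForLabel>EventCode</fieldForLabel>
  <fieldForValue>EventCode</fieldForValue>
  <search>
    <query>index=winevent source="WinEventLog:System" SourceName="Microsoft-Windows-GroupPolicy" Type=Error | stats count by EventCode | sort EventCode</query>
    <earliest>$field1.earliest$</earliest>
    <latest>$field1.latest$</latest>
  </search>
</input>
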
We want all the hosts in index=aws that are NOT in index=windows. Example:

| tstats count where index=aws by host | table host | search NOT [| tstats count where index=windows by host | table host]
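For what it's worth, a hedged alternative that makes the same comparison in a single tstats pass and avoids subsearch result limits; the mvcount check keeps only hosts that appear in aws and not in windows:

| tstats count where index=aws OR index=windows by host index
| stats values(index) AS indexes by host
| where mvcount(indexes) = 1 AND indexes = "aws"
| table host
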
Hi all, we have a Splunk Intermediate Forwarder (Heavy Forwarder) set up to receive logs from Universal Forwarders that sit in different networks. Forwarding is working fine for logs, as we can see the internal logs and Windows events in our index cluster. The issue is with the Windows performance metrics, which aren't appearing in our performance metrics indexes. I can see from the internal logs that the Universal Forwarders are collecting the metrics, and those internal logs are being forwarded successfully. Any suggestions would be helpful.
Hello all, I need the logs from my security index to be sent to my S3 SmartStore bucket after 1 month, but they should still be accessible for another 5 months.
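A hedged sketch of the indexes.conf settings that usually govern this for a SmartStore index; the stanza and volume names below are placeholders, not real config. With SmartStore, warm buckets are uploaded to the remote store as they roll, so "sent to S3 after 1 month" effectively means "allowed to be evicted from the local cache after 1 month", while total searchability is governed by frozenTimePeriodInSecs:

[security]
remotePath = volume:s3_smartstore/security
# searchable for 6 months in total (about 15552000 seconds)
frozenTimePeriodInSecs = 15552000
# prefer keeping roughly the last 30 days in the local cache
hotlist_recency_secs = 2592000
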
Hi, the app https://splunkbase.splunk.com/app/5530 shows that it is cloud compatible, but it is failing the vetting process during installation.