I have installed Ubuntu and Kali on VirtualBox, and the DVWA application on the Ubuntu VM. Now I need to install the Splunk forwarder on Ubuntu and capture DVWA application logs: when I attack the DVWA application from the Kali VM, alerts and logs should be generated and sent to the Windows 10 host, where Splunk is installed (i.e., sent directly to Splunk on Windows 10). I want to know how to install the Splunk forwarder, how to configure the inputs and outputs config files, and how to add a monitor stanza. I have tried installing the Splunk forwarder but am facing difficulty. Kindly connect and let me know: https://www.linkedin.com/in/shadoww-jin-b1b71a192/
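A minimal sketch of the forwarder configuration the question describes, assuming DVWA runs under Apache with logs in /var/log/apache2, and that the Windows 10 host's Splunk instance listens on the default receiving port 9997 (the IP address below is hypothetical; adjust paths, IP, and index to your environment, then restart the forwarder):

```ini
# $SPLUNK_HOME/etc/system/local/outputs.conf on the Ubuntu universal forwarder
[tcpout]
defaultGroup = win10_indexer

[tcpout:win10_indexer]
# hypothetical host-only adapter IP of the Windows 10 host running Splunk Enterprise
server = 192.168.56.1:9997

# $SPLUNK_HOME/etc/system/local/inputs.conf on the Ubuntu universal forwarder
[monitor:///var/log/apache2/access.log]
sourcetype = access_combined
index = main

[monitor:///var/log/apache2/error.log]
sourcetype = apache_error
index = main
```

On the Windows 10 side, enable receiving on port 9997 (Settings > Forwarding and receiving), and remember to open that port in the Windows firewall.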
We are trying to configure the Duo Connector in Splunk, but we are getting an "error credentials" message, and the Duo documentation does not have a lot of information. Has anyone got Duo working in Splunk?
I had the idea to upload our old ticketing system's data into Splunk and create dashboards to search through the information instead of using grep commands. I have a few CSV files (9, to be exact) and was wondering about the best way to move forward. Questions to get me started:

- Should I append them into one big CSV file?
- Should I index the CSV files?
- Should I use a .zip file with all the CSVs inside?
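One option, leaning toward the "index them" answer: monitor the CSVs as individual files with header-based field extraction, with no need to merge or zip them. A minimal sketch, assuming a hypothetical /data/tickets directory and a tickets index that you create first:

```ini
# inputs.conf
[monitor:///data/tickets/*.csv]
sourcetype = ticket_csv
index = tickets

# props.conf
[ticket_csv]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
```

Keeping the nine files separate preserves the source field per file, which is handy for dashboards; a zip would be unpacked anyway, and appending loses that distinction.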
Hi everyone! I want to add a button to each row in my table under a field named 'details'. When this button is clicked, a pop up panel will expand with more details about this row of information. It uses the uuid to pull this information. So far, I have the table with the functionality working to open this pop up panel. I just want the field to look like a button. Can anyone help me with this? Table field is 'details'.

<row>
  <panel>
    <title>Job Executions</title>
    <table>
      <search>
        <query>index=asvcardmadacquisitionsanalytics_qa uuid=* | search notebook=$notebook$ | eval details = "Show Details" | table uuid, notebook, status, run_duration, details</query>
        <earliest>$_time.earliest$</earliest>
        <latest>$_time.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="drilldown">cell</option>
      <drilldown>
        <condition field="details">
          <set token="clicked_uuid">$row.uuid$</set>
          <set token="enable_pop_up"></set>
          <unset token="form.hide_panel"></unset>
        </condition>
        <condition field="uuid"></condition>
        <condition field="notebook"></condition>
        <condition field="status"></condition>
        <condition field="run_duration"></condition>
      </drilldown>
      <format type="color" field="status">
        <colorPalette type="map">{"success":#02A524,"failed":#E50202}</colorPalette>
      </format>
      <format type="color" field="details">
        <colorPalette type="map">{"Show Details": #66b3ff}</colorPalette>
      </format>
    </table>
  </panel>
  <panel depends="$enable_pop_up$">
    <input type="checkbox" token="hide_panel">
      <label></label>
      <choice value="hide">Hide</choice>
      <change>
        <condition value="hide">
          <unset token="enable_pop_up"></unset>
        </condition>
      </change>
    </input>
    <table>
      <title>Pop up Details Panel</title>
      <search>
        <query>index=asvcardmadacquisitionsanalytics_qa | search uuid=$clicked_uuid$ | table uuid, bap, notebook, run_duration, status, yarn_application_id, dag_start_time, dag_end_time, region | transpose | rename column as "Details", "row 1" as "Value"</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <option name="drilldown">none</option>
    </table>
  </panel>
</row>
We have installed Gogs in our architecture and are using it as a VCS. Now we are interested in getting its data into Splunk. Please help me.
Hello all, I was wondering if anyone else has seen their event count drop (down to ~10%) after the FirePower team updates signatures on the Defense Center? In the last couple of months I saw this happen twice: once while I was running 'Firepower eNcore Add-On for Splunk' v4.0.7, and once while running 3.6.8 (I downgraded). The FirePower team says there was nothing abnormal about their update. I am running ~ Splunk Enterprise 8.4. Upgrading to eNcore 4.0.9 is not an option (the forwarder crashed on that version weeks ago; we opened a Cisco TAC case and they still haven't been able to tell us what happened). Cisco Secure eStreamer Client (f.k.a. Firepower eNcore) Add-On for Splunk: https://splunkbase.splunk.com/app/3662/
I'm using Splunk 8.0.3 on a Linux machine. It seems a tar.gz file with the same hash gets re-indexed by Splunk. The only difference I see is that when I do a 'stat <file>', it shows as Changed, and Changed means the metadata has changed. Is this behavior documented somewhere? How do I stop Splunk from re-indexing this file if only the metadata changed?
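One avenue worth checking (a sketch, not a confirmed fix for this case): the file monitor decides whether a file is new from a CRC of its initial bytes plus the tracked seek pointer, not from filesystem metadata, so the CRC settings in inputs.conf are the usual knobs to inspect. The monitor path below is hypothetical:

```ini
[monitor:///data/archives/*.tar.gz]
# hash more of the file head (default is 256 bytes), so archives whose first
# bytes happen to match are not confused with one another
initCrcLength = 1024
# avoid crcSalt = <SOURCE> for files that get touched or re-delivered:
# it makes the CRC path-dependent and can itself trigger re-indexing
```

Note that archive files are decompressed and read in full when picked up, which makes any re-detection particularly expensive for tar.gz inputs.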
After figuring out what was going on, I figured I would feed this back up. The current package has a bug in the extraction User_as_user. The REPORT pointing to the extraction cannot work, as all of the transforms have to be lower case. As a result, this REPORT in props.conf doesn't work, and the resulting user field essentially returns the same thing as if there were a FIELDALIAS linking User to user, including any "domain name\username" format.

props.conf has:

REPORT-user_for_sysmon = User_as_user

and in transforms.conf:

[User_as_user]
SOURCE_KEY = User
REGEX = (?:[^\\]+\\)?(.+)
FORMAT = user::"$1"

To fix this, change User_as_user to user_as_user in both places:

props.conf:

REPORT-user_for_sysmon = user_as_user

and in transforms.conf:

[user_as_user]
SOURCE_KEY = User
REGEX = (?:[^\\]+\\)?(.+)
FORMAT = user::"$1"
Splunk is sending email alerts for some of my alerts, but not all of them. I have scheduled alerts that run each day at specific times; these alerts complete their queries in 1-10 seconds. Nothing has changed in the Splunk environment. I run this search: index=_* (ERR* OR FAIL* OR WARN* OR CANNOT) (email OR sendemail). 9 results are returned, and I find this error: ERROR:root:(452, '4.3.1 Insufficient system resources (UsedDiskSpace[E:\\Program Files\\Microsoft\\Exchange Server\\V15\\TransportRoles\\data\\Queue])'). I've checked with IT and they stated there are no issues with the Exchange server, but as I said above, some alerts work and others do not. Any guidance you can provide would be great.
Good day. This is my first time trying to filter data with props.conf/transforms.conf, so sorry if this post is in the wrong location. This is on a standalone Windows Splunk 8.0.3 box. I have placed props.conf/transforms.conf in the C:\Program Files\Splunk\etc\system\local directory. The data I want to filter out is the rhttpproxy data from an ESXi host:

<167>2020-11-20T15:12:26.668Z ESX01.test.com Rhttpproxy: verbose rhttpproxy[2101380] [Originator@6876 sub=Proxy Req 11290] Resolved endpoint : [N7Vmacore4Http16LocalServiceSpecE:0x0000005839540e50] _serverNamespace = /vpxa action = Allow _port = 8089

host = 192.168.10.10
process = Rhttpproxy
source = tcp:514
sourcetype = syslog

My current config is:

props.conf
[source::tcp:514]
TRANSFORMS-null = setnull

transforms.conf
[setnull]
REGEX = rhttpproxy
DEST_KEY = queue
FORMAT = nullQueue

Things I have tried:

[host::192.168.10.10]
TRANSFORMS-null = setnull

[host::192\.168\.10\.10]
TRANSFORMS-null = setnull

[syslog]
TRANSFORMS-null = setnull

[setnull]
REGEX = verbose\srhttpproxy
DEST_KEY = queue
FORMAT = nullQueue

[setnull]
SOURCE_KEY = field:process
REGEX = Rhttpproxy
DEST_KEY = queue
FORMAT = nullQueue

I have read the documentation several times, and I am just not understanding it.
https://docs.splunk.com/Documentation/Splunk/8.1.0/Admin/Transformsconf
https://docs.splunk.com/Documentation/Splunk/8.0.3/Admin/Propsconf

Thanks in advance.
Aaron
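For comparison, a commonly working shape for this kind of null-queue filter (a sketch, assuming the events really arrive with sourcetype syslog on this box): REGEX runs against _raw and is case-sensitive unless prefixed with (?i), and Splunk must be restarted after editing these files for index-time transforms to take effect.

```ini
# props.conf
[syslog]
TRANSFORMS-null = setnull

# transforms.conf
[setnull]
REGEX = (?i)rhttpproxy
DEST_KEY = queue
FORMAT = nullQueue
```

If the filter still does nothing, verify that the sourcetype at index time is what you expect, and that the props/transforms live on the instance that first parses the data (on a standalone box, that is this box).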
Hi, I am trying to craft a query that will look for Windows devices that have been rebooted and then have accessed a certain file path to launch a service. I have attempted this by creating a subsearch for Windows event ID 6005 (reboot) and then passing the computer name associated with that event to the main search for devices that have accessed the file path. I am not sure how to obtain an output that shows the computer name of a device that has recently rebooted and has also accessed the file path. Thank you for any help; I am new to Splunk subsearches and appreciate any pointers.

index=wineventlog Creator_Process_Name="C:\\Program Files\\XXX\\YYY\\file.exe"
  [search index=wineventlog ComputerName="*xyz.local" EventCode="6005"
   | stats count by ComputerName, EventCode
   | fields ComputerName, EventCode]
| stats values (?) by EventCode
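One way the intent above is often expressed (a sketch; the field names ComputerName and Creator_Process_Name are taken from the question and assumed consistent across both event types). The subsearch should return only ComputerName, so it acts as a pure filter on the outer search, and the outer stats then groups by computer rather than by EventCode:

```
index=wineventlog Creator_Process_Name="C:\\Program Files\\XXX\\YYY\\file.exe"
    [ search index=wineventlog ComputerName="*xyz.local" EventCode="6005"
      | stats count by ComputerName
      | fields ComputerName ]
| stats latest(_time) AS last_access, values(Creator_Process_Name) AS launched_from by ComputerName
```

Returning EventCode from the subsearch would force the outer events to also have EventCode="6005", which is why the original query matched nothing.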
I need a user with the role "user" to be able to check license consumption in Splunk Cloud. When I go into roles, I do not see any option that refers to the license, as there is on-premises.
Hello World, In our organization we have been using AppDynamics as an APM solution. We are now trying to mature into using AppDynamics to monitor for a certain exception and restart the application service if that exception occurs. Is there any proven way to do this using the Machine Agent or app agent that we have deployed? Any suggestions and help will be much appreciated. Thanks
Hello, I am not getting any results while executing the query below. Can you please help me understand what I am doing wrong with the eval command and the if condition below.
I have an index, say index1, containing Air details and ServerName, where the Air value is missing for some server names. I have another index, say index2, in which I get the Air details that are missing in index1. I want to use the index2 Air value where I don't have a value in index1.
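A common pattern for this kind of fill-in (a sketch; the field names Air and ServerName are taken from the question, and both indexes are assumed to share ServerName):

```
index=index1 OR index=index2
| eval air1=if(index=="index1", Air, null()), air2=if(index=="index2", Air, null())
| stats values(air1) AS air1, values(air2) AS air2 by ServerName
| eval Air=coalesce(air1, air2)
| table ServerName, Air
```

coalesce() returns its first non-null argument, so the index1 value wins when present and the index2 value fills the gaps.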
Hello, I have a new index (it's a monster) and it is eating up my disk space. Until I move it to the physical server I need to contain it. I limited maxTotalDataSizeMB, which seems to work, but the buckets skipped cold and landed directly in frozen, so I cannot search them. The hot/warm storage is local on the VM; the cold, frozen, and thawed storage is on S3. The goal is 7 days in hot/warm (sooner if over maxTotalDataSizeMB), then cold for 90 days (no size limit), then thawed for 1 year (no size limit). Here are my current settings (btool output; lines prefixed with a file path show where that setting comes from):

archiver.enableDataArchive = 0
/opt/splunk/etc/system/default/indexes.conf archiver.maxDataArchiveRetentionPeriod = 0
/opt/splunk/etc/system/default/indexes.conf assureUTF8 = false
bucketRebuildMemoryHint = 0
coldPath = /mnt/archive_s3/SPLUNK_DB/indexname/colddb
/opt/splunk/etc/system/default/indexes.conf coldPath.maxDataSizeMB = 0
coldToFrozenDir = /mnt/archive_s3/SPLUNK_DB/indexname/Frozenarchive
/opt/splunk/etc/system/default/indexes.conf coldToFrozenScript =
compressRawdata = 1
/opt/splunk/etc/system/default/indexes.conf datatype = event
/opt/splunk/etc/system/default/indexes.conf defaultDatabase = main
enableDataIntegrityControl = 0
enableOnlineBucketRepair = 1
/opt/splunk/etc/system/default/indexes.conf enableRealtimeSearch = true
enableTsidxReduction = 0
frozenTimePeriodInSecs = 3024000
homePath = $SPLUNK_DB/indexname/db
/opt/splunk/etc/system/default/indexes.conf homePath.maxDataSizeMB = 0
/opt/splunk/etc/system/default/indexes.conf hotBucketTimeRefreshInterval = 10
/opt/splunk/etc/system/default/indexes.conf indexThreads = auto
/opt/splunk/etc/system/default/indexes.conf journalCompression = gzip
/opt/splunk/etc/system/default/indexes.conf maxBloomBackfillBucketAge = 30d
/opt/splunk/etc/system/default/indexes.conf maxBucketSizeCacheEntries = 0
maxConcurrentOptimizes = 6
maxDataSize = auto_high_volume
maxGlobalDataSizeMB = 0
maxHotBuckets = 10
maxHotIdleSecs = 86400
/opt/splunk/etc/system/default/indexes.conf maxHotSpanSecs = 7776000
maxMemMB = 20
/opt/splunk/etc/system/default/indexes.conf maxMetaEntries = 1000000
/opt/splunk/etc/system/default/indexes.conf maxRunningProcessGroups = 8
/opt/splunk/etc/system/default/indexes.conf maxRunningProcessGroupsLowPriority = 1
/opt/splunk/etc/system/default/indexes.conf maxTimeUnreplicatedNoAcks = 300
/opt/splunk/etc/system/default/indexes.conf maxTimeUnreplicatedWithAcks = 60
maxTotalDataSizeMB = 76800
maxWarmDBCount = 200
/opt/splunk/etc/system/default/indexes.conf memPoolMB = auto
minHotIdleSecsBeforeForceRoll = 0
/opt/splunk/etc/system/default/indexes.conf minRawFileSyncSecs = disable
/opt/splunk/etc/system/default/indexes.conf minStreamGroupQueueSize = 2000
/opt/splunk/etc/system/default/indexes.conf partialServiceMetaPeriod = 0
/opt/splunk/etc/system/default/indexes.conf processTrackerServiceInterval = 1
/opt/splunk/etc/system/default/indexes.conf quarantineFutureSecs = 2592000
/opt/splunk/etc/system/default/indexes.conf quarantinePastSecs = 77760000
/opt/splunk/etc/system/default/indexes.conf rawChunkSizeBytes = 131072
/opt/splunk/etc/system/default/indexes.conf repFactor = 0
rotatePeriodInSecs = 60
rtRouterQueueSize =
rtRouterThreads =
selfStorageThreads =
/opt/splunk/etc/system/default/indexes.conf serviceInactiveIndexesPeriod = 60
/opt/splunk/etc/system/default/indexes.conf serviceMetaPeriod = 25
/opt/splunk/etc/system/default/indexes.conf serviceOnlyAsNeeded = true
/opt/splunk/etc/system/default/indexes.conf serviceSubtaskTimingPeriod = 30
/opt/splunk/etc/system/default/indexes.conf splitByIndexKeys =
/opt/splunk/etc/system/default/indexes.conf streamingTargetTsidxSyncPeriodMsec = 5000
/opt/splunk/etc/system/default/indexes.conf suppressBannerList =
suspendHotRollByDeleteQuery = 0
/opt/splunk/etc/system/default/indexes.conf sync = 0
syncMeta = 1
thawedPath = /mnt/archive_s3/SPLUNK_DB/indexname/thaweddb
/opt/splunk/etc/system/default/indexes.conf throttleCheckPeriod = 15
/opt/splunk/etc/system/default/indexes.conf timePeriodInSecBeforeTsidxReduction = 604800
/opt/splunk/etc/system/default/indexes.conf tsidxReductionCheckPeriodInSec = 600
tsidxWritingLevel =
tstatsHomePath = volume:_splunk_summaries/$_index_name/datamodel_summary
/opt/splunk/etc/system/default/indexes.conf warmToColdScript =

I assume the issue is coldPath.maxDataSizeMB = 0, and that is why cold is skipped, but I'm not sure. I would appreciate it if somebody could fix my settings.
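For what it's worth, a sketch of a local indexes.conf stanza aimed at the stated goal (the stanza name and the size numbers are illustrative, not a verified fix). Two observations drive it: frozenTimePeriodInSecs = 3024000 is only 35 days, far short of the intended 7 + 90 days of searchability, and once the index exceeds maxTotalDataSizeMB, Splunk freezes the oldest buckets outright, which can look like cold being "skipped" (coldPath.maxDataSizeMB = 0 means unlimited, not disabled):

```ini
[indexname]
homePath   = $SPLUNK_DB/indexname/db
coldPath   = /mnt/archive_s3/SPLUNK_DB/indexname/colddb
thawedPath = /mnt/archive_s3/SPLUNK_DB/indexname/thaweddb
coldToFrozenDir = /mnt/archive_s3/SPLUNK_DB/indexname/Frozenarchive
# cap local hot/warm size so warm buckets roll to cold on S3 instead of
# the whole index hitting the total cap (tune to the VM's local disk)
homePath.maxDataSizeMB = 76800
# keep buckets searchable ~97 days (7 hot/warm + 90 cold) before archiving
frozenTimePeriodInSecs = 8380800
# raise the total cap (or size it for ~97 days of data) so age, not size,
# is what finally freezes buckets
maxTotalDataSizeMB = 512000
```

Note that thawed is not an automatic stage: frozen buckets in coldToFrozenDir are unsearchable until you manually copy them into thawedPath and rebuild them.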
Hi All, My goal is to mask the WinEventLog:Security data, which will save us unnecessary license usage. The link below, under the Saving License section, provides more information on how to mask those details: https://hurricanelabs.com/splunk-tutorials/leveraging-windows-event-log-filtering-and-design-techniques-in-splunk/ So initially I created a private app as per the document below: https://docs.splunk.com/Documentation/SplunkCloud/8.1.2008/User/PrivateApps That is, I created a folder named "Splunk_TA_Wineventlog_Props", and inside that folder I created the default and metadata directories. In the default directory I created app.conf, props.conf, and transforms.conf as described in the hurricanelabs link above, and in the metadata folder I created default.meta as well. I then zipped "Splunk_TA_Wineventlog_Props" and later converted the zip to tgz. When I tried to upload the app to Splunk Cloud, the vetting process failed with the error "App validation failed to complete. Unknown failure: Contact your administrator for details or try again later." So kindly let me know what I am missing, since I created and uploaded the app as per the prerequisites, but I don't know why it errors during the vetting process.
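For reference, a minimal private-app skeleton of the shape the post describes (the folder name follows the post; the app.conf values are illustrative). One common pitfall is "converting" a .zip to .tgz by renaming it: vetting expects a real gzipped tar with the app folder at the top level, e.g. created with `tar czf Splunk_TA_Wineventlog_Props.tgz Splunk_TA_Wineventlog_Props/`.

```ini
# Layout:
#   Splunk_TA_Wineventlog_Props/
#     default/app.conf
#     default/props.conf
#     default/transforms.conf
#     metadata/default.meta

# default/app.conf
[install]
is_configured = 0

[ui]
is_visible = 0
label = Splunk_TA_Wineventlog_Props

[launcher]
author = <your name>
description = WinEventLog:Security filtering props and transforms
version = 1.0.0

# metadata/default.meta
[]
access = read : [ * ], write : [ admin ]
export = system
```

If the archive is correct and validation still fails with "Unknown failure", running the package through the AppInspect tool locally before upload usually surfaces the specific check that is breaking.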
I want to create a line chart with 3 lines: 2 for percentage trends and 1 for a threshold. But how can I make the threshold line on the chart NOT show data values while keeping the data values for the other lines?
We have a setup where all users by default have access to all indexes. Now we have to restrict access to a specific index and give it only to selected users. Following this discussion I found the srchIndexesDisallowed setting listed in the latest authorize.conf manual (8.1.0), which made me extremely happy, but I'm having some problems after testing it.

I have a "super" group with:
srchIndexesAllowed = *
srchIndexesDisallowed = indexA

and an "allowA" group with:
srchIndexesAllowed = indexA

What I expect to happen is:
- people in the super group have access to all indexes except indexA
- people in both the super and allowA groups have access to all indexes (including indexA)

Unfortunately, it looks like the srchIndexesDisallowed in super overrides the srchIndexesAllowed in allowA. I've double-checked, and if a user is a member of only allowA they can access it. I don't imagine this is the intended behavior. I'm wondering if someone else has looked into this and figured out a solution (not counting the suggestions in the thread linked above).
Hi Everyone! Hope you all are staying safe. I am currently working on a Splunk health dashboard which uses the Status Indicator app and .js scripts. 1) The task at hand is to capture the field name. Referencing the screenshot (highlighted in yellow, Screenshot 1), when clicking on "4268" it should capture the value "Anti Virus Protected". Since this is also the "split by" field, I tried all the options available, like click.value, click.name2, click.name1, etc., and none of them seem to capture the value "Anti Virus Protected". Could someone please help me achieve this? And also help me understand why click.name, click.name2, etc. are not working?