All Posts

I was able to get things to work with makeresults and a mocked-up dashboard. How does this work for you on your end:

<form version="1.1" theme="dark">
  <label>Test Dashboard</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="system_id" searchWhenChanged="true">
      <label>system_id</label>
      <choice value="*">*</choice>
      <choice value="AA-1">AA-1</choice>
      <choice value="AA-2">AA-2</choice>
      <choice value="AA-10">AA-10</choice>
      <initialValue>*</initialValue>
      <default>*</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults format="json" data="[{\"system_id\":\"AA-1\"}, {\"system_id\":\"AA-2\"}, {\"system_id\":\"AA-10\"}, {\"system_id\":\"AA-15\"}, {\"system_id\":\"AA-1\"}, {\"system_id\":\"AA-123\"}, {\"system_id\":\"aa-1-a\"}, {\"system_id\":\"aa-1-b\"}]" | search system_id="$system_id$"</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>
There is a difference between "is supported" and "will work on". 9.0.1 was released before (or roughly at the same time as) the 6.0 kernel, so it's not on the officially supported list. That doesn't mean it won't work, though. Splunk is a userland program, so it should be relatively independent of the kernel version.
Hi @Sthitha.Madhiraju, Check out this existing TKB article https://community.appdynamics.com/t5/Knowledge-Base/Why-is-the-Java-Agent-not-reporting-to-the-Controller/ta-p/13974 Let me know if it helps. 
These are the officially available types of license: https://docs.splunk.com/Documentation/Splunk/9.1.1/Admin/TypesofSplunklicenses There is no "edu" license as such. There could be special pricing for edu organizations, but you should verify that with a sales representative, not on a community forum.
A couple of steps to troubleshoot:
- If you remove the SSL, can you get Splunk to start up and listen on that port?
- Are your paths 100% correct? This could be related to a typo in the path/filename.
- Do your certificates have the correct permissions so Splunk can read them?

As a side note, Splunk will auto-encrypt passwords like that in your .conf files. In the documentation (e.g. the inputs.conf sslPassword documentation), you'll see the following wording for values it does this with: "Upon first use, the input encrypts and rewrites the password."
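For reference, a minimal sketch of what a receiving SSL input can look like in inputs.conf - the port, path, and password here are placeholders, not your actual values:

# inputs.conf (sketch - substitute your own port, cert path, and password)
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/server.pem
sslPassword = your_cert_password
requireClientCert = false

If Splunk starts cleanly with a stanza like this but your real one fails, compare the two line by line for path or option typos.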
Then you need to read this:

* key=regex format:
* A whitespace-separated list of Event Log components to match, and regular expressions to match against them.
* There can be one match expression or multiple expressions per line.
* The key must belong to the set of valid keys provided in the "Valid keys for the key=regex format" section.
* The regex consists of a leading delimiter, the regex expression, and a trailing delimiter. Examples: %regex%, *regex*, "regex"
* When multiple match expressions are present, they are treated as a logical AND. In other words, all expressions must match for the line to apply to the event.
* If the value represented by the key does not exist, it is not considered a match, regardless of the regex.
* Example: whitelist = EventCode=%^200$% User=%jrodman%
  Include events only if they have EventCode 200 and relate to User jrodman

# Valid keys for the key=regex format:
* The following keys are equivalent to the fields that appear in the text of the acquired events:
  Category, CategoryString, ComputerName, EventCode, EventType, Keywords, LogName, Message, OpCode, RecordNumber, Sid, SidType, SourceName, TaskCategory, Type, User
* There are three special keys that do not appear literally in the event.
* $TimeGenerated: The time that the computer generated the event
* $Timestamp: The time that the event was received and recorded by the Event Log service.

What's important is that you specify which field the regex is to be applied to, and that it needs to be enclosed in delimiters.
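For instance, a sketch of how this looks in an inputs.conf stanza - the stanza name and patterns are just examples, not recommendations:

# inputs.conf (sketch)
[WinEventLog://Security]
whitelist = EventCode=%^(4624|4625)$% User=%svc_%

This would keep only Security events whose EventCode is 4624 or 4625 AND whose User matches svc_ - both expressions must match, per the logical AND rule above.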
This is not Splunk Support. This is a community-driven forum.
My guess at what is driving these overages is that something didn't get indexed during your outage scenario. For example, if an indexer runs out of disk space, that will bubble out to your edge tiers, so a forwarder might pause reading a log file. When things come back online, everything tries to catch up, and you can end up in a scenario where in one day you index three days of data.

To see if this is what was happening to you, you can do a quick check of what your license usage was over time. If it was really low during your outage, then pegged once you were back up, then this scenario is the most likely.

Another way to look for this scenario is to search your indexes and compare the values of _time and _indextime. The field _time is the timestamp of when the event occurred (often the timestamp within the event data). The field _indextime is the timestamp of when Splunk indexed the data. Under normal conditions the variance between these fields should be small. You could run a quick search that calculates the average difference of _time and _indextime by hour over the time of your outage (and include some data before and after the actual outage to get a sense of your boundaries). If you see a large average difference during your outage period, this also would tell you that Splunk was "catching up", and that's what caused your overages.
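A minimal sketch of such a search - index=your_index is a placeholder, and the time range should bracket your outage with some margin on either side:

index=your_index earliest=-7d@d latest=now
| eval lag_seconds = _indextime - _time
| bin _time span=1h
| stats avg(lag_seconds) AS avg_lag max(lag_seconds) AS max_lag BY _time

A sustained spike in avg_lag right after the outage window would confirm the catch-up scenario described above.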
I'm not fully sure what you want to achieve, especially with this dedup. I tend to avoid that command altogether because it leaves only the first occurrence of a given field value, regardless of the actual order of events at that point of the search pipeline. If - assuming that your search makes sense - you just want to find all those events for which the first occurrence of a given MAC was over 24h ago, use eventstats to find min(_time) by RogueMACAddress, and then you can do a "where" to find those that are lower than now()-86400.
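A minimal sketch of that approach, assuming the field is named RogueApMacAddress as in your events and that index/sourcetype are placeholders:

index=your_index sourcetype=your_sourcetype
| eventstats min(_time) AS first_seen BY RogueApMacAddress
| where first_seen < now() - 86400

eventstats annotates every event with the earliest time its MAC was seen, so the where clause keeps only events whose MAC first appeared more than 24 hours ago.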
Hi @Yann.Buccellato, There have been changes to the doc. We added the following statement as Step 1 in the "Before You Begin the Integration" section: "Create an AppDynamics Discovery Source in your ServiceNow® instance." The original Step 1 was moved to Step 2. @Joe.Catera also shared the SaaS doc. Are you able to proceed with your work now?
Hi @gcusello  What I'm looking to do is find any duplicate occurrence of a RogueApMacAddress (any particular value that repeats more than once) within 24 hours. Including the command you provided wouldn't help with that. But I hope you understood what I'm trying to do. Let me know if not!
The lookup can be there, but it might not be defined as an Automatic Lookup.  Take a look at your lookup configurations for the Sysmon app - an automatic lookup could be defined there and disabled.  You can also define your own. 
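If you do end up defining your own, a minimal sketch of an automatic lookup - the sourcetype, lookup name, file, and field names here are hypothetical:

# transforms.conf
[sysmon_eventcode_lookup]
filename = sysmon_eventcodes.csv

# props.conf
[XmlWinEventLog:Microsoft-Windows-Sysmon/Operational]
LOOKUP-eventcode_desc = sysmon_eventcode_lookup EventCode OUTPUT EventDescription

The CSV would need an EventCode column to match on and an EventDescription column to output onto each event.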
I want a dashboard panel to display the line chart, the query should be run in the background every 30 minutes to an hour and update the display.  So I assume I mean that I want to directly execute the query on the DB and display results.  I've created the Input.  Do I need to create an Output?  I don't want to update the database at all, just read it.
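On the refresh cadence, a minimal sketch of how a Simple XML panel can re-run its search on an interval - the query and connection values are placeholders:

<search>
  <query>| dbxquery query="your SQL here" connection="your_connection" | stats sum(Number_of_Submissions) AS Number_of_Submissions BY Date Group_name</query>
  <refresh>30m</refresh>
  <refreshType>delay</refreshType>
</search>

And no, an Output shouldn't be needed for read-only use - DB Connect Outputs are for writing Splunk search results back into a database.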
I am pretty new to ES correlation searches and I am trying to figure out how to add additional fields to notable events to make them easier to investigate. We have this correlation search enabled, "ESCU - Detect New Local Admin account - Rule":

`wineventlog_security` EventCode=4720 OR (EventCode=4732 Group_Name=Administrators)
| transaction member_id connected=false maxspan=180m
| rename member_id as user
| stats count min(_time) as firstTime max(_time) as lastTime by user dest
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `detect_new_local_admin_account_filter`

When I run the above search using the Search and Reporting app I get way more fields than what I see in the Additional Fields on the notable itself. For example, in the notable event the User field shows the SID, with no other fields to identify the actual username. To fix this I could add the field Account_Name that shows when I run the above search from the Search and Reporting app. I tried adding that field by going into Configure -> Incident Management -> Incident Review Settings -> Incident Review - Event Attributes. But it is still not showing. Am I missing something here?
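One likely cause worth noting as a sketch, not a confirmed fix: Incident Review can only display fields that actually exist on the notable event, and the stats command in this search drops everything except count, firstTime, lastTime, user, and dest. Carrying the extra field through stats - assuming Account_Name is present on the raw events - would look like:

| stats count min(_time) as firstTime max(_time) as lastTime values(Account_Name) as Account_Name by user dest

With the field present in the notable itself, the Incident Review - Event Attributes entry should then be able to display it.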
@gcusello Where do I put those blocks of code you sent?  Do I put them somewhere in the dashboard builder, or in the DBConnect for the Input?  Like I said, I'm very new to this.
I don't know PowerShell logs...but in a situation like this I would set Selected to Yes for the fields you're trying to figure out. Based on your screenshots, those fields appear for 100% of your events. When you set that to Yes, you will see the field and value appear with each event in your results. Then you can try to match up the value with the text that's there in the event. But also keep in mind there could be calculated fields, too. For example, MessageTotal might be the number of bytes in the event, and won't actually appear within the data. Having them displayed with each event will help you deduce what they might represent, though - if MessageTotal was 1 for a whole bunch of 1-byte events, then you know your answer.
Thank you!
Hi @dgwann, let me understand: do you want to use DB Connect to extract data from a DB, saving it in an index and then searching the index, or to directly execute the query at runtime on the DB and display the results? The second option doesn't consume license, but it will surely perform much worse, because Splunk DB Connect was created to extract data from a DB by executing scheduled queries, not to execute runtime queries. When you have the data in an index you can create your search using SPL, something like this:

index=your_index
| stats sum(Number_of_Submissions) AS Number_of_Submissions BY Date Group_name

If instead you want to run online queries (I don't recommend this because it will be much slower!), you should try something like this:

| dbxquery query="<your_query>" connection="<mySQL>"
| stats sum(Number_of_Submissions) AS Number_of_Submissions BY Date Group_name

For more details see https://docs.splunk.com/Documentation/DBX/3.14.1/DeployDBX/Commands Ciao. Giuseppe
I have a DBConnect Input defined that produces the following output:

Date        Group_Name  Number_of_Submissions
2023-10-02  Apple       780
2023-10-03  Apple       1116
2023-10-04  Apple       1154
2023-10-05  Apple       786
2023-10-06  Apple       699
2023-10-02  Banana      358
2023-10-03  Banana      760
2023-10-04  Banana      254
2023-10-05  Banana      1009
2023-10-06  Banana      876
2023-10-02  Others      1265
2023-10-03  Others      1400
2023-10-04  Others      257
2023-10-05  Others      109
2023-10-06  Others      1709

I want to have this data displayed on a dashboard as a multi-line chart: the x-axis is the Date, the y-axis is the Number of Submissions, and there should be different color lines representing the different groups. I am new to Splunk. Very new. I need succinct instructions pls. Thanks!!!
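A minimal sketch of one way to do this, assuming the dbxquery approach from earlier in the thread and that the query returns the three columns shown above - the query and connection names are placeholders:

| dbxquery query="your SQL here" connection="your_connection"
| chart sum(Number_of_Submissions) AS Submissions OVER Date BY Group_Name

In the dashboard panel, run that search and set the visualization type to Line Chart: Date becomes the x-axis, and the BY clause of chart produces one series (one colored line) per Group_Name.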