All Posts

As @bowesmana said (and as do the articles he referenced, and many others on this subject), the calculation that determines the colour should be done in SPL, so try modifying your search accordingly.
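For example, a minimal sketch of the SPL side, assuming a numeric field called latency and made-up thresholds and field names (all of these are assumptions, adjust to your data):

    ... your base search ...
    | eval colour=case(latency>800, "red", latency>400, "amber", true(), "green")
    | table host latency colour

The dashboard then only has to map the colour value to a cell colour instead of recomputing the logic in the UI layer.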
What does "link" mean in this context? The second query doesn't return any trace ids. Please clarify what you are trying to do (in non=SPL terms, provide some sample events, and a representation of y... See more...
What does "link" mean in this context? The second query doesn't return any trace ids. Please clarify what you are trying to do (in non=SPL terms, provide some sample events, and a representation of your expected output.
We don't know your data and we don't know your config, but my guess would be that your data is not properly onboarded - you don't have a proper configuration for this type of source, so Splunk tries by default to extract key-value pairs with its own built-in mechanics, which ends up as you can see. FireEye can be painful to set up. Try to avoid CEF altogether - it's not very nice to parse.
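If you do end up onboarding this source yourself rather than avoiding CEF, a minimal props.conf sketch along these lines replaces the default key-value extraction with your own; the sourcetype name and the regex are assumptions and have not been tested against real FireEye events:

    # props.conf (sourcetype name is hypothetical)
    [fireeye:cef]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    # disable the built-in automatic key=value extraction that stops at the first space
    KV_MODE = none
    # search-time extraction that captures cs4 up to the next key=value pair (regex is an assumption)
    EXTRACT-cs4 = cs4=(?<cs4>.+?)(?=\s+\w+=|$)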
As @ITWhisperer said - show us your raw events and what you have tried so far, because maybe your idea was OK but applied in the wrong place.
Hi @splunkguy, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors.
1. Are you sure you're using the webhook inputs app, or did you just configure a HEC input?
2. Whatever that ngrok is - since you said that Splunk is listening on localhost - is it running on the same machine?
3. Did you verify whether ngrok is actually connecting to your Splunk instance and sending data? (One way to check this from the Splunk side is shown below.)
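A hedged way to check point 3 from the Splunk side is to look at the internal logs for HEC activity and errors; the component name below is how HEC problems typically show up in splunkd.log, and if nothing comes back at all, HEC may not be receiving any requests in the first place:

    index=_internal sourcetype=splunkd component=HttpInputDataHandler
    | stats count by log_level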
I have installed free Splunk Enterprise on my local system and it can be accessed via localhost:8000. I have also configured the webhook receiver in this instance to run on port 8088 via the HTTP Event Collector settings. I tried ngrok to expose localhost:8000 and localhost:8088 and used that public URL as a webhook listening server, but Splunk is not receiving any events. I can see my ngrok server being hit with the events, but it seems it's not able to forward them over to Splunk. What am I doing wrong here? What's the right way to expose my localhost Splunk instance to start receiving these webhook events? Thank you in advance for the help! Webhooks Input #splunklocalhost
Hello Members, I have data coming from a HF, indexed on the indexer, and I can search it. The problem is in the details of the event. For example, an event contains cs4=FIREEYE test, but when I look at the details of this event I only see cs4=FIREEYE - the first string; the rest is truncated. Why?
Hi, thanks for the response. If I can reindex the data, how do I apply line breaking settings efficiently to achieve this?
If I have two queries:
1. index=poc container_name=app horizontalId=orange
   outputs events with the trace ids
2. index=poc container_name=app ExecutionTimeAspect Elastic Vertical Search Query Service | rex field=_raw "execution time is[ ]+(?<latency>\d+)[ ]+ms" | stats p90(latency) as Latency
   outputs Latency = 845
I want to link the output of query 2 to query 1 via the trace ids for the P90 Latency.
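Assuming both event sets carry the trace id as an extracted field - I'll call it traceId here, which is an assumption about your field names - one hedged way to restrict query 2 to the trace ids found by query 1 is a subsearch:

    index=poc container_name=app ExecutionTimeAspect Elastic Vertical Search Query Service
        [ search index=poc container_name=app horizontalId=orange
          | fields traceId
          | dedup traceId ]
    | rex field=_raw "execution time is[ ]+(?<latency>\d+)[ ]+ms"
    | stats p90(latency) as Latency

Keep in mind that subsearches are limited (10,000 results and 60 seconds by default), so for large trace id sets a stats-based correlation on traceId may be more robust.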
There are several different issues touched on here. As you have already indexed the data, you cannot break the events again and re-index them. You can, however, manipulate your data at search time, but you will have to "break" the data into separate results in each search explicitly using search commands. If you want newly ingested data properly broken and indexed as separate events, you need to configure your ingestion settings properly. But that will only work on newly ingested data; old data will stay as it is.
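For the newly ingested data, line breaking is configured in props.conf on the first full Splunk instance that parses the data (heavy forwarder or indexer). A minimal sketch, assuming events are separated by newlines and using a hypothetical sourcetype name:

    # props.conf on the parsing tier (HF or indexer); sourcetype name is hypothetical
    [my:sourcetype]
    SHOULD_LINEMERGE = false
    # the capture group is the event separator and is discarded
    LINE_BREAKER = ([\r\n]+)
    # raise this if individual events are longer than the default 10000 bytes
    TRUNCATE = 10000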
Hi @bowesmana, I tried your suggestion but it doesn't work. Maybe I got something wrong?

<dashboard version="1.1" theme="light">
  <label>ID patient</label>
  <row>
    <panel>
      <html depends="$hidden$">
        <style>
          #coloured_cell table tbody td div.multivalue-subcell[data-mv-index="0"]{
            display: none;
          }
        </style>
      </html>
      <table id="coloured_cell">
        <search>
          <query>sourcetype=csv | eval value=mvappend(sex,'S_N mm') | table Age id sex "S_N mm" N_S_Ba value</query>
          <earliest>0</earliest>
          <latest></latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">row</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <format type="number" field="id"></format>
        <format type="color" field="value">
          <colorPalette type="expression">case(mvindex(value, 1) &gt;"79" AND mvindex(value, 0) == "male","#00FF00")</colorPalette>
        </format>
      </table>
    </panel>
  </row>
</dashboard>
Actually, with Linux in general, everything needs "some work" to be done. A "Linux box" is a very broad term: a Linux server can be based on one of many different distributions (or even be installed as LFS) and can be configured in a gazillion different ways, so while you could cover some typical cases (like RHEL9 / default install / default rsyslog configuration), there is no way to cover "any Linux". Also remember that audit logs depend greatly (mostly, if not exclusively) on which audit rules you have defined on your system.
SELinux has nothing to do with firewalld in the sense that adding a rule to firewalld should work regardless of SELinux status - the rule should show. True, SELinux could prevent the process from processing connections, but that's completely independent of firewalld.
Adding to what @richgalloway already said - remember that some options might simply not be available in specific situations. Many Splunk components will actually run without the web interface enabled, so in those cases you will obviously not be able to use it for upgrades. If your environment grows and you step into clustering grounds, the only way of installing apps (including upgrading them) will be the clustering mechanisms (either pushing from the deployer or the cluster manager). Even with a small-scale installation you can use a deployment server to serve apps to your Splunk components, and that's actually the typical Splunk way of automating app install/upgrade.
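For illustration, an upgrade via the deployment server then boils down to dropping the new app version into $SPLUNK_HOME/etc/deployment-apps on the deployment server and mapping it to clients in serverclass.conf; the class name, whitelist pattern and app name below are made up:

    # serverclass.conf on the deployment server (names are hypothetical)
    [serverClass:my_forwarders]
    whitelist.0 = uf-*

    [serverClass:my_forwarders:app:my_app]
    restartSplunkd = true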
87 views of my question and zero comments.  Same question posted by a different person 3 years ago and no answer to them either.  It seems like no one responsible for this product actually looks at the questions.  A reply that states "this is not possible or will never be implemented" is preferable to complete silence.  Even better would be acknowledging the use case as valid and committing to adding the feature or providing a workaround such as code that could be inserted into the app configuration to enable it as a custom feature.
Did this ever get resolved?  I applied blacklist = \.gz$ and it is not working for me.
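For comparison, blacklist is typically set per monitor stanza and is matched against the full path of each file, and the forwarder needs a reload or restart for the change to take effect. A hedged example with hypothetical path, index and sourcetype values:

    # inputs.conf (path, index and sourcetype are hypothetical)
    [monitor:///var/log/myapp]
    blacklist = \.gz$
    index = main
    sourcetype = myapp:log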
I am trying to show the results of the drilldown search of a notable without having to leave the event/case page. I am able to grab the drilldown search, send it back to Splunk using the 'run_query' command, and receive the information, but regardless of what fields I put in the "display" field of the command nothing shows up in the widget, and attempting to create a new artifact with the data throws errors about it not being correctly formatted JSON. Does anyone have a best practice for showing the results of an SPL query within Splunk SOAR, in the event it was run from?
Hi there, this looks like a known issue in this version; the Splunk dev team is working on it and it will be fixed in an upcoming release. As a workaround you can try:
1. Open the Settings menu and select Advanced Search.
2. Next, select Macros.
3. Search for dmc_licensing_base_summary. The app must be Monitoring Console or All - you won't find the macro otherwise.
4. Click on the macro to edit it.
5. In the definition box, change pool="$pool_clause$" to "$pool_clause$" only.
6. Save the macro and reload the Historic License Usage dashboard in the Monitoring Console.
This works.
| makeresults
| fields - _time
| eval hosts="$servers_entered$"
| makemv delim="," hosts
| eval count=mvcount(hosts)
| table count