All Posts

Had a hard time debugging that one. It only works if your SPL code with the subquery return is in a dashboard "base search":

<dashboard>
  <label>My dashboard title</label>
  <search id="parent_search_1">
    <query>``` put your query here with your subquery return $ ```</query>
  </search>
  <row>
    <panel>
      <table>
        <title>My child visualization</title>
        <search base="parent_search_1">
          <query>``` have the rest of your query there ```</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>
Hi

I'm not sure if I understand your need, but maybe this helps?

| stats sum(CountEvents) as countE by CT
| eventstats sum(countE) as Total
| eval perc=round(countE*100/Total,2)
| chart sum(perc) as "EventsPercentages[%]" values(countE) as countE over CT

Then in the visualisation tab select Format -> Chart Overlay -> Overlay (e.g. your % field) and set View as Axis = On.

r. Ismo
Hi Splunkers,

I have to forward data inside CSV files from an on-prem HF to Splunk Cloud and I'm facing some issues, because the data do not seem to be forwarded. Let me share with you some additional bits.

Info about the data
- Source data are on a cloud instance (Forcepoint) provided by the vendor
- A script has been provided by the vendor to pull data from the cloud
- The script is installed and configured on our Splunk HF
- Data are saved locally on the HF
- Data are in .csv files

Info about the HF configuration
- We created a new data input under Settings -> Data inputs -> Local inputs -> Files & Directories
- We set as data input the path where the .csv files are saved after script execution
- We set the proper sourcetype and index
- Of course, we configured the HF to send data to Splunk Cloud. We downloaded the file from the cloud "Universal Forwarder" app and installed it as an app on the HF: outputs.conf is properly configured, and other data are sent to Splunk Cloud without problems (for example, Network input data go to Cloud without issues; same for the Windows ones)

Info about the sourcetype and index and their deployment
- We created a custom add-on that simply provides the sourcetype "forcepoint"
- The sourcetype is configured to extract data from CSV; that means we set the parameter INDEXED_EXTRACTIONS = csv
- We installed the add-on on both the HF and Splunk Cloud
- The index, simply called "web", has been created on both the HF and Splunk Cloud

Anyway, it seems that data are not sent from the HF to the Cloud. So, did I forget some steps, or did I get some of the above wrong? The equivalent .conf stanzas for the steps above are sketched below.
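For reference, a minimal sketch of the .conf stanzas behind the UI steps above; the monitor path and server name are illustrative placeholders, not the real values:

# inputs.conf on the HF (path is illustrative)
[monitor:///opt/forcepoint/output/*.csv]
sourcetype = forcepoint
index = web

# props.conf in the custom add-on (INDEXED_EXTRACTIONS must be on the HF,
# the first full Splunk instance in the data path)
[forcepoint]
INDEXED_EXTRACTIONS = csv

# outputs.conf from the Splunk Cloud "Universal Forwarder" credentials app (server is illustrative)
[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
server = inputs.example.splunkcloud.com:9997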
Hi,

Too many hours to solve such a simple question... It is supposed to be a basic thing.

I want to present both percentages and regular values in a bar chart (it can be in the tooltip, like it exists in a pie chart). If that is not possible, present only percentages but add the "%" symbol (when I tried to add "%" it converted the fields to strings and nothing was shown in the chart).

* I can't add a js script, no access to the server

This is my query:

| stats sum(CountEvents) by CT
| rename "sum(CountEvents)" as "countE"
| eventstats sum(countE) as Total
| eval perc=round(countE*100/Total,2)
| chart sum(perc) as "EventsPercentages[%]" over CT

thanks a lot
In the System Center dashboard, only *NIX system data is available, not Windows systems. I've already installed the Splunk Add-on for Microsoft Windows, and searches against the Inventory and Performance data models run successfully. When I check the search behind the System Center dashboard, it refers to All_Inventory.OS.os; when I run this search, it only returns the *NIX systems. What can I do to populate the System Center dashboard with the data from Microsoft Windows? I've found a link, but it seems to cover quite an old version: https://community.splunk.com/t5/Splunk-Enterprise-Security/Enterprise-Security-System-Center-or-Update-Center-only-have/m-p/136434
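For reference, a sketch of the kind of data model search in question; the datamodel name assumes the CIM Inventory model (internal name Compute_Inventory):

| tstats count from datamodel=Compute_Inventory where nodename=All_Inventory.OS by All_Inventory.OS.os

Only *NIX values come back for All_Inventory.OS.os here, even though the Windows data is indexed.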
Each time you are making very different statements of the requirements. I am not sure it is worth my time to propose a search until you can define your requirements in terms of data examples (use cases). So, let me clarify one more time with data.

This first set has two task_id's (out of three) that changed event_id (let's forget about increment or decrement for now) during the time period. Therefore this set should not alarm:

_time                event_id  event_name        task_id
2022-09-04 21:40:39  1274856   pending-transfer  1
2022-09-04 21:35:39  1274856   pending-transfer  2
2022-09-04 21:30:39  1274817   pending-transfer  3
2022-09-04 22:10:39  1274856   pending-transfer  1
2022-09-04 22:05:39  1274748   pending-transfer  2
2022-09-04 22:00:39  1274902   pending-transfer  3

Let me construct a slightly different set in which every task_id has an unchanging event_id:

_time                event_id  event_name        task_id
2022-09-04 21:40:39  1274856   pending-transfer  1
2022-09-04 21:35:39  1274748   pending-transfer  2
2022-09-04 21:30:39  1274902   pending-transfer  3
2022-09-04 22:10:39  1274856   pending-transfer  1
2022-09-04 22:05:39  1274748   pending-transfer  2
2022-09-04 22:00:39  1274902   pending-transfer  3

With the second set, you want an alert. Do the above sufficiently capture the use case requirements? Here are emulations of the two sets:

| makeresults
| eval _raw = "Time event_name task_id event_id
9/4/22 09:40:39 PM pending-transfer 1 1274856
9/4/22 09:35:39 PM pending-transfer 2 1274856
9/4/22 09:30:39 PM pending-transfer 3 1274817
9/4/22 10:10:39 PM pending-transfer 1 1274856
9/4/22 10:05:39 PM pending-transfer 2 1274748
9/4/22 10:00:39 PM pending-transfer 3 1274902"
| multikv
| eval _time = strptime(Time, "%m/%d/%y %I:%M:%S %p")
| fields - linecount _raw
``` data emulation set 1 ```

| makeresults
| eval _raw = "Time event_name task_id event_id
9/4/22 10:10:39 PM pending-transfer 1 1274856
9/4/22 10:05:39 PM pending-transfer 2 1274748
9/4/22 10:00:39 PM pending-transfer 3 1274902
9/4/22 09:40:39 PM pending-transfer 1 1274856
9/4/22 09:35:39 PM pending-transfer 2 1274748
9/4/22 09:30:39 PM pending-transfer 3 1274902"
| multikv
| eval _time = strptime(Time, "%m/%d/%y %I:%M:%S %p")
| fields - linecount _raw
``` data emulation set 2 ```

If there are additional cases to be differentiated, please play with the emulations and construct the differentiation. Again, forget how many alerts you want to send. Just focus on the data input and whether or not a given dataset should trigger an alert.
Hi @gmbdrj,

Enterprise Security isn't multi-tenant! As a workaround, you can store each customer's data in a different index, but anyway you have to:

- create different indexes for each customer to identify them (not a big job),
- manually modify all the searches in Correlation Searches (not a big job),
- manually modify threat intelligence searches and inputs (very hard job!),
- manually modify identities and assets (hard job!),
- create specific dashboards for the customers, because it isn't possible to modify the main ES dashboards (not a big job).

I did it for some of our customers, but it wasn't a simple job, and my hint is to engage Splunk Professional Services. The index-per-customer restriction is sketched below.

Ciao.
Giuseppe
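For the index-per-customer part, a minimal sketch of the role restrictions in authorize.conf (role and index names are illustrative):

# authorize.conf on the ES search head (names are illustrative)
[role_companya]
importRoles = user
srchIndexesAllowed = companya_*
srchIndexesDefault = companya_main

Each customer's users get a role that can only search that customer's indexes, while the holding company's role imports all of them.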
May I ask what is causing this?
We are running Splunk ES and trying to make log search and app interfaces for each company. Let's call them CompanyA, CompanyB and CompanyC. Each company has to see its own data and also its notable events in ES. As a holding company, we need to access and see all data. What is the best way to achieve this goal? Please advise.
Hi @kamlesh_vaghela,

Good day!

Thanks for your response! Here is my XML after adding the changes that you mentioned. Still, I didn't get the expected result; the color of the info icon is not changed.

<panel id="global_status_op">
  <title>Global Compliance</title>
  <html>
    <style>
      .dashboard-body { background: #0000FF !important; }
      .dashboard-header h2 { color: #0000FF !important; }
    </style>
    <div class="infobutton" parent="global_status_op" type="collapse" style="display: none">
      <p style="font-size:15pt;">The compliance is calculated as follows:</p>
      <p style="font-size:9pt;">- If compliant, the asset is considered as compliant</p>
      <p style="font-size:9pt;">- If not compliant, the asset is considered as not compliant</p>
    </div>
  </html>
</panel>
How to calculate the centroid of each cluster after using the KMeans clustering algorithm? I have tried the following, but none of them worked:

1 -

| inputlookup iris.csv
| fit KMeans k=3 petal*
| eval point_size = 1
| appendpipe
    [| stats mean(petal*) as petal* by cluster
    | eval species = "Centroid: ".cluster
    | eval point_size = 2]
| fields species petal* point_size

2 - showcentroid = t
Hi

Probably "rm -f <AXML file>" is the only option for you? And remember: never try to add the search app to the Deployer to clean these up!!!

Have you tried to look at them via "Settings -> All Configurations"? With some objects (e.g. users' private ones) that could help you.

My advice for the future is to never put any KO inside the Search app. You should always use separate app(s) for your own KOs and manage those via the Deployer.

r. Ismo
Hi

Here is an old answer about SQL join equivalence in SPL:
https://community.splunk.com/t5/Splunk-Search/What-is-the-relation-between-the-Splunk-inner-left-join-and-the/m-p/391288/thread-id/113948

r. Ismo
Hi @nytins,

At first: using yesterday as the Time Range, if you schedule your alert at 10:00 and at 19:00, you get the same result in both runs.

For the issue, what happens if you use "Once"?

Then, are you sure that the Trigger action you configured can manage more than one result? I don't know PagerDuty.

Ciao.
Giuseppe
Hi

Have you already looked in the internal logs at what has happened? There should be entries about the firing of this alert.

r. Ismo
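A minimal sketch of that check, assuming the alert's saved-search name (replace it with your own):

index=_internal sourcetype=scheduler savedsearch_name="your_alert_name"
| table _time status result_count alert_actions

The scheduler log shows whether the alert's search ran, how many results it returned, and which actions were triggered.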
Hi

As @richgalloway already said, the normal TTL for queries is 10 min, and you can extend it and give permissions to your search with Job -> Edit Job Settings... The max value is 7 days, which should be a long enough time to debug and resolve issues. And if that isn't enough for you, just run your search again and then extend it to 7 days again.

If you want to extend that default 10 min TTL, you can do it with https://docs.splunk.com/Documentation/Splunk/latest/Admin/Limitsconf#TTL. You must remember that this is a global attribute and it affects everyone. You could increase the value of "ttl":

ttl = <integer>
* How long, in seconds, the search artifacts should be stored on disk after the job completes. The ttl is computed relative to the modtime of the status.csv file of the job, if the file exists, or the modtime of the artifact directory for the search job.
* If a job is being actively viewed in the Splunk UI then the modtime of the status.csv file is constantly updated such that the reaper does not remove the job from underneath.
* Default: 600 (10 minutes)

BUT, as said, Splunk keeps those search job files on disk until that time has passed. In active systems with lots of users this means quite a lot of disk space under /opt/splunk/var on the SH side, which of course affects your system performance. Personally I keep that TTL at quite a small value, usually the default, and if/when needed extend the TTL for an individual search only.

There is also an undocumented (mainly for internal debug purposes) noop command which you can use in SPL to extend the TTL of an individual search:

index=_internal
| noop set_ttl=18000
| head 1

The above noop extends the TTL to 5 h for this job instead of the normal 10 min.

r. Ismo
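If the job is no longer open in the UI, the TTL of an existing job can also be changed over the REST API; a minimal sketch, where the SID and credentials are illustrative:

# extend the TTL of an existing search job to 7 days (604800 s)
curl -k -u admin:changeme \
  https://localhost:8089/services/search/jobs/1662300000.123/control \
  -d action=setttl -d ttl=604800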
Hi @yuvrajsharma_13,

As I said, if the issue is that the difference is shown as a date, you can use the tostring option to show it in hours, minutes and seconds; see the sketch below.

For the missing UNIQUE_ID, you found the solution.

Let me know if I can help you more, or, please, accept one answer for the other people of the Community.

Ciao and happy splunking.
Giuseppe

P.S.: Karma Points are appreciated
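A minimal sketch of that tostring option (the field name diff_seconds is illustrative):

| eval diff_readable = tostring(diff_seconds, "duration")

This renders a difference expressed in seconds as a readable duration (HH:MM:SS) instead of a date.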
One remark. As @gcusello said, you don't lose any data which are based on files. But there are some scripts which use commands like ps, netstat, sar etc. to collect data periodically. When your TA is not in place and those inputs do not run, you obviously lose those events, as those input scripts haven't run. But that shouldn't be any real issue.
This is how it works with b64 encoding:

index=_internal
| head 1
| decrypt field=splunk_server btoa()
| eval foo=decrypted
| decrypt field=foo b64()
| table splunk_server foo decrypted

You must remember that it uses the field decrypted as output and doesn't change the original field. Here is what functions it supports: https://splunkbase.splunk.com/app/5565 (Details tab).
I apologize for the confusion. I will try my best to explain it better.

For example:

Event_name = pending-transfer
Number of task_id's that the event_name (pending-transfer) has: 3

The table below contains the event_id's received by "pending-transfer" for different task_id's at 9:30 PM.

Table 1
Time                event_name        task_id  event_id
9/4/22 09:40:39 PM  pending-transfer  1        1274856
9/4/22 09:35:39 PM  pending-transfer  2        1274856
9/4/22 09:30:39 PM  pending-transfer  3        1274817

At 10:00 PM, there are new event_id's for different task_id's for "pending-transfer", as shown below.

Table 2
Time                event_name        task_id  event_id
9/4/22 10:10:39 PM  pending-transfer  1        1274856
9/4/22 10:05:39 PM  pending-transfer  2        1274748
9/4/22 10:00:39 PM  pending-transfer  3        1274902

For task_id = 1, there is no change in the event_id (1274856) for the event arriving at 10:10 PM compared to the previous event_id at 9:40 PM, whereas for the other task_id's (task_id=2, task_id=3) there is a change in the event_id. Therefore, an alert needs to be generated, since there is no change in the event_id for task_id=1.

So, the logic needs to check if there is a change in event_id for ALL task_id's in an event_name, and if there is NO change in event_id for ANY of the task_id's in an event_name, then an alert needs to be triggered.

I will be creating the alert for each event_name by using a where clause:

splunk query | where event_name = "pending-transfer"

However, I am not planning to create an alert for each specific task_id in the event_name, as that would lead to too many alerts:

splunk query | where event_name = "pending-transfer" task_id=1
splunk query | where event_name = "pending-transfer" task_id=2
splunk query | where event_name = "pending-transfer" task_id=3

Thank You
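A minimal sketch of that logic over the comparison window; the base search is a placeholder, and dc() counts the distinct event_id values seen per task_id, so a count of 1 means the event_id never changed:

splunk query
| where event_name="pending-transfer"
| stats dc(event_id) as distinct_event_ids latest(event_id) as last_event_id by event_name task_id
| where distinct_event_ids = 1

If this returns any rows, at least one task_id kept the same event_id across the window, which is exactly the condition that should trigger the alert.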