All Posts


Hi @gmbdrj, Enterprise Security isn't multi-tenant! As a workaround, you can store each customer's data in a different index, but you still have to:

- create different indexes for each customer to identify them (not a big job),
- manually modify all the searches in Correlation Searches (not a big job),
- manually modify threat intelligence searches and inputs (a very hard job!),
- manually modify identities and assets (a hard job!),
- create specific dashboards for the customers, because it isn't possible to modify the main ES dashboards (not a big job).

I did this for some of our customers, but it wasn't a simple job, and I suggest engaging Splunk Professional Services. Ciao. Giuseppe
May I ask what is causing this?
We are running Splunk ES and trying to provide log search and app interfaces for each company. Let's call them CompanyA, CompanyB and CompanyC. Each company has to see its own data and also its notable events in ES. As the holding company, we need to access and see all data. What is the best way to achieve this goal? Please advise.
Hi @kamlesh_vaghela,

Good day!

Thanks for your response! Here is my XML after adding the changes you mentioned. I still don't get the expected result; the color of the info icon is not changed.

<panel id="global_status_op">
  <title>Global Compliance</title>
  <html>
    <style>
      .dashboard-body { background: #0000FF !important; }
      .dashboard-header h2 { color: #0000FF !important; }
    </style>
    <div class="infobutton" parent="global_status_op" type="collapse" style="display: none">
      <p style="font-size:15pt;">The compliance is calculated as follows:</p>
      <p style="font-size:9pt;">- If compliant, the asset is considered as compliant</p>
      <p style="font-size:9pt;">- If not compliant, the asset is considered as not compliant</p>
    </div>
  </html>
</panel>
How do I calculate the centroid of each cluster after using the KMeans clustering algorithm? I have tried the following but neither of them worked:

1 - | inputlookup iris.csv | fit KMeans k=3 petal* | eval point_size = 1 | appendpipe [| stats mean(petal*) as petal* by cluster | eval species = "Centroid: ".cluster | eval point_size = 2] | fields species petal* point_size

2 - showcentroid = t
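For reference, the centroid that the appendpipe search above tries to compute is simply the per-feature mean of the points in each cluster. A minimal pure-Python sketch of that computation, using toy data rather than the iris lookup:

```python
from collections import defaultdict

def centroids(points, labels):
    """Centroid of each cluster = per-feature mean of its points."""
    by_cluster = defaultdict(list)
    for point, cluster in zip(points, labels):
        by_cluster[cluster].append(point)
    return {
        cluster: [sum(col) / len(members) for col in zip(*members)]
        for cluster, members in by_cluster.items()
    }

# toy 2-D data: two points in cluster 0, one in cluster 1
points = [(1.0, 2.0), (3.0, 4.0), (10.0, 10.0)]
labels = [0, 0, 1]
print(centroids(points, labels))  # {0: [2.0, 3.0], 1: [10.0, 10.0]}
```

This is exactly what `| stats mean(petal*) as petal* by cluster` does in the first attempt, so that appendpipe approach is a reasonable way to surface centroids alongside the clustered points.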
Hi, probably "rm -f <AXML file>" is the only option for you. And remember: never try to add the search app to the Deployer to clean these up!!! Have you tried looking at them via "Settings -> All Configurations"? With some objects (e.g. users' private ones) that could help you. My advice for the future: never put any KO inside the Search app. You should always use separate app(s) for your own KOs and manage those via the Deployer. r. Ismo
Hi, here is an old answer about SQL join equivalence in SPL: https://community.splunk.com/t5/Splunk-Search/What-is-the-relation-between-the-Splunk-inner-left-join-and-the/m-p/391288/thread-id/113948 r. Ismo
Hi @nytins, first of all, using Yesterday as the Time Range, if you schedule your alert at 10:00 and at 19:00 you get the same result in both runs. For the issue, what happens if you use "Once"? Also, are you sure that the trigger action you configured can handle more than one result? I don't know PagerDuty. Ciao. Giuseppe
Hi, have you already looked in the internal logs at what happened? There should be entries about the firing of this alert. r. Ismo
Hi, as @richgalloway already said, the normal TTL for queries is 10 min, and you can extend it and set permissions on your search with Job -> Edit Job Settings. The max value is 7 days, which should be long enough to debug and resolve issues. And if that isn't enough for you, just run your search again and then extend it again to 7 days. If you want to extend that default 10 min TTL, you can do it via https://docs.splunk.com/Documentation/Splunk/latest/Admin/Limitsconf#TTL. You must remember that this is a global attribute and it affects everyone. You could increase the value of "ttl":

ttl = <integer>
* How long, in seconds, the search artifacts should be stored on disk after the job completes. The ttl is computed relative to the modtime of the status.csv file of the job, if the file exists, or the modtime of the artifact directory for the search job.
* If a job is being actively viewed in the Splunk UI then the modtime of the status.csv file is constantly updated such that the reaper does not remove the job from underneath.
* Default: 600 (10 minutes)

BUT, as said, it keeps those search job files on disk until that time has passed. On active systems with a lot of users this means quite a lot of disk space under /opt/splunk/var on the SH side, which of course affects your system performance. Personally I keep that TTL at quite a small value, usually the default, and if/when needed extend the TTL for an individual search only. There is also an undocumented (mainly for internal debug purposes) noop command which you can use in SPL to extend the TTL for an individual search:

index=_internal | noop set_ttl=18000 | head 1

The above noop extends the TTL to 5h for this job instead of the normal 10 min. r. Ismo
Hi @yuvrajsharma_13, as I said, if the issue is that the difference is shown as a date, you can use the tostring option to show it in hours, minutes and seconds. For the missing UNIQUE_ID, you found the solution. Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
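For reference, the kind of formatting SPL's tostring applies to a seconds delta can be sketched in Python. This is a rough analogue, not the exact Splunk implementation, and assumes the difference is already a plain number of seconds:

```python
def as_duration(seconds):
    """Render a seconds delta as H:MM:SS (rough analogue of a duration string)."""
    seconds = int(seconds)
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{hours}:{minutes:02d}:{secs:02d}"

print(as_duration(3725))  # 1:02:05
```

The point is the same as in SPL: the raw difference is just seconds, and only the rendering turns it into hours, minutes and seconds.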
One remark: as @gcusello said, you don't lose any data that is file-based. But there are some scripted inputs which use commands like ps, netstat, sar etc. to collect data periodically. While your TA is not in place and those inputs don't run, you obviously lose those events, since the input scripts haven't run. But that shouldn't be any real issue.
This is how it works with b64 encoding:

index=_internal | head 1 | decrypt field=splunk_server btoa() | eval foo=decrypted | decrypt field=foo b64() | table splunk_server foo decrypted

You must remember that it uses the field "decrypted" as output and doesn't change the original field. The functions it supports are listed at https://splunkbase.splunk.com/app/5565 (Details tab).
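The round trip in that search (encode with btoa(), then decode with b64()) is ordinary Base64. A quick Python illustration of the same idea, with a made-up value standing in for the splunk_server field:

```python
import base64

original = "my-splunk-server"  # hypothetical stand-in for the splunk_server field

# encode, as btoa() does, then decode again, as b64() does
encoded = base64.b64encode(original.encode()).decode()
decoded = base64.b64decode(encoded).decode()

print(encoded)
print(decoded)  # my-splunk-server
```

As with the decrypt app, the original value is untouched; each step just produces a new output value.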
I apologize for the confusion. I will try my best to explain it better. For example:

Event_name = pending-transfer
Number of task_ids that the event_name (pending-transfer) has: 3

The table below contains the event_ids received by "pending-transfer" for different task_ids at 9:30 PM.

Table 1
Time                 event_name        task_id  event_id
9/4/22 09:40:39 PM   pending-transfer  1        1274856
9/4/22 09:35:39 PM   pending-transfer  2        1274856
9/4/22 09:30:39 PM   pending-transfer  3        1274817

At 10:00 PM, there are new event_ids for different task_ids for "pending-transfer", as shown below.

Table 2
Time                 event_name        task_id  event_id
9/4/22 10:10:39 PM   pending-transfer  1        1274856
9/4/22 10:05:39 PM   pending-transfer  2        1274748
9/4/22 10:00:39 PM   pending-transfer  3        1274902

For task_id = 1, there is no change in the event_id (1274856) between the event that arrived at 10:10 PM and the previous event at 9:40 PM, whereas for the other task_ids (task_id=2, task_id=3) there is a change in the event_id. Therefore, an alert needs to be generated, since there is no change in event_id for task_id=1. So, the logic needs to check whether there is a change in event_id for ALL task_ids in an event_name, and if there is NO change in event_id for ANY of the task_ids in an event_name, then an alert needs to be triggered.

I will be creating the alert for each event_name by using a where clause:

splunk query | where event_name = "pending-transfer"

However, I am not planning to create an alert for each specific task_id in the event_name, as that would lead to too many alerts:

splunk query | where event_name = "pending-transfer" task_id=1
splunk query | where event_name = "pending-transfer" task_id=2
splunk query | where event_name = "pending-transfer" task_id=3

Thank you
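The alert logic described above (flag any task_id whose latest event_id equals its previous one) can be sketched in Python, using the two tables as toy data. This is only an illustration of the comparison, not an SPL implementation:

```python
def stale_tasks(previous, current):
    """Return task_ids whose event_id did not change between two runs."""
    return sorted(t for t, e in current.items() if previous.get(t) == e)

# event_id per task_id at 9:30 PM (Table 1) and at 10:00 PM (Table 2)
run_1 = {1: 1274856, 2: 1274856, 3: 1274817}
run_2 = {1: 1274856, 2: 1274748, 3: 1274902}

unchanged = stale_tasks(run_1, run_2)
if unchanged:  # ANY unchanged task_id triggers the alert
    print(f"ALERT: no change in event_id for task_id(s) {unchanged}")
```

This matches the requirement: the check runs over all task_ids of one event_name, and a single unchanged task_id is enough to trigger one alert, rather than one alert per task_id.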
Our specific requirement is to have links to share with others, say while troubleshooting an issue, which can then be used even after weeks to come back to the same exact search result. So creating a report doesn't make sense in this case.

To have something available weeks later is exactly what a report is for. You don't want to force people to create their own bookmarks for such purposes. And there is absolutely no need to instruct future users to enter the times manually. (In fact, the best report is one where you disallow the time picker.) The point is, you CAN reproduce search results years later if your saved search contains the same time window as your original search. Have you read the document I linked? Say I want people to run the following:

index=_internal
| timechart span=2h count by sourcetype ``` data emulation 2 ```
| addtotals
| delta "Total" as _delta
| foreach * Total [eval <<FIELD>> = if(-_delta > Total, null(), '<<FIELD>>')]

for the past 2 days, where "past 2 days" is merely a reference to my search time. You probably recognize that you don't need any precision in this time period. (I'll demonstrate more precise requirements later.) So, say I am searching at 1015 Zulu time on 2023-09-13. It is perhaps sufficient to pass 1000 Zulu time to future users. (Or 1100, as chances may suit.) I can save the search as:

index=_internal earliest=09/11/2023:10:00:00 latest=09/13/2023:10:00:00
| timechart span=2h count by sourcetype ``` data emulation 2 ```
| addtotals
| delta "Total" as _delta
| foreach * Total [eval <<FIELD>> = if(-_delta > Total, null(), '<<FIELD>>')]

If you want to be more precise, you can always specify the time with more precision. You can do this by looking at your watch, or you can get it from Splunk. For example, I want:

index=_internal | stats count by sourcetype

for a certain period that I am searching. I can do:

index=_internal | stats count by sourcetype | addinfo | fields - info_s*

This gives me:

sourcetype              count  info_max_time    info_min_time
dbx_health_metrics      8220   1694583382.000   1694579760.000
dbx_server              2      1694583382.000   1694579760.000
splunk_python           76     1694583382.000   1694579760.000
splunk_search_messages  2      1694583382.000   1694579760.000
splunk_web_access       5      1694583382.000   1694579760.000
splunk_web_service      15     1694583382.000   1694579760.000
splunkd                 32275  1694583382.000   1694579760.000
splunkd_access          824    1694583382.000   1694579760.000
splunkd_ui_access       619    1694583382.000   1694579760.000

I just put info_min_time and info_max_time back:

index=_internal earliest=1694579760.000 latest=1694583382.000 | stats count by sourcetype

(They happen to be the past 4 hours.) As I said, if I want to know what happened in the past four hours tonight, this search will always give me the same output whether I run it tomorrow or a year later. And I never have to write a memo to myself about when I ran this search, nor do I need to use the time selector again.
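The reproducibility trick above pins the window with epoch-second bounds. The same idea in Python, with a toy event list and hypothetical timestamps, treating earliest as inclusive and latest as exclusive:

```python
# Fixing earliest/latest to epoch values makes a query reproducible:
# the same bounds always select the same events, whenever you re-run it.
events = [
    {"_time": 1694579000.0, "sourcetype": "splunkd"},
    {"_time": 1694580000.0, "sourcetype": "splunkd"},
    {"_time": 1694584000.0, "sourcetype": "splunkd_access"},
]

earliest, latest = 1694579760.0, 1694583382.0

def in_window(events, earliest, latest):
    """Select only events whose _time falls inside the pinned window."""
    return [e for e in events if earliest <= e["_time"] < latest]

print(len(in_window(events, earliest, latest)))  # 1
```

Because the bounds are absolute values rather than relative ones like "last 4 hours", the result set cannot drift as the wall clock moves on.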
You could use the previously mentioned methods to contact Splunk and say that you are willing to contribute your enhancements to TA-aws if they are interested in them. I suppose the PM team would at least be interested in discussing with you what you have and whether they can use it.
The Add-on docs describe how a single IIS instance should be configured so that it logs the proper data. How to deploy that configuration in your environment is something you have to discuss with your admins and check against your local policies. We can't tell you whether GPO will be the appropriate solution in your case. It might be (I'm not sure whether you can configure those settings with GPO), but there can be other ways to do it (for example, if you use any third-party automation solution, you could use that instead of deploying the settings via GPO). The requirements for the Add-on regarding IIS configuration are described here: https://docs.splunk.com/Documentation/AddOns/released/MSIIS/Hardwareandsoftwarerequirements#Microsoft_IIS_setup_requirements, but how to apply them properly is up to you and your infrastructure team.
Hi All, any luck on this issue? I am facing a similar issue with the process monitoring extension. Error log:

[Monitor-Task-Thread1] 13 Sep. 2023 10:57:56,299 DEBUG WindowsParser-Process Monitor - Unable to retrieve process info for pid 5248 org.hyperic.sigar.SigarPermissionDeniedException: Access is denied.

-Pavan
I think there is a way to increase the default expiry times, but this comes at a cost: the jobs occupy space in the user's disk quota. So it's probably not a good idea to increase these to huge values, from both a user and a machine resource perspective.
Our specific requirement is to have links to share with others, say while troubleshooting an issue, which can then be used even after weeks to come back to the same exact search result. So creating a report doesn't make sense in this case.

Also, after running searches in the UI, it's hard to add 'earliest' and 'latest' manually, with the correct time formatting, every time you want to share one. Most users don't know this, or wouldn't find it easy, I think.