Good day. Is there a way to join all my rows into one? My simple query:

```
index=collect_identities sourcetype=ldap:query user
| dedup email
| table email extensionAttribute10 extensionAttribute11 first last identity
```

shows results like this, as I have more than one email per user:

email | extensionAttribute10 | extensionAttribute11 | first | last | identity
user@domain.com | | user@consultant.com | User | Surname | USurname
userT1@domain.com | user@domain.com | user@domain.com | User | Surname | USurname
userT0@domain.com | user@domain.com | user@domain.com | User | Surname | USurname

I want to add a primary key that searches for "user@domain.com" and displays all the email addresses they have in one row. Example:

email | extensionAttribute10 | extensionAttribute11 | first | last | identity | email2 | email3
user@domain.com | user@domain.com | user@consultant.com | User | Surname | USurname | userT1@domain.com | userT0@domain.com
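A minimal sketch of one possible approach, using stats to group everything under the identity field (field names come from the question; the mvindex positions assume the primary address sorts first, which may need adjusting for your data):

```
index=collect_identities sourcetype=ldap:query user
| dedup email
| stats values(email) AS emails first(extensionAttribute10) AS extensionAttribute10 first(extensionAttribute11) AS extensionAttribute11 first(first) AS first first(last) AS last BY identity
| eval email=mvindex(emails,0), email2=mvindex(emails,1), email3=mvindex(emails,2)
| table email extensionAttribute10 extensionAttribute11 first last identity email2 email3
```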
Hello all, I configured an app and, in the asset config, I added an environment variable "https_proxy". Somehow the action still does not go out via the proxy and instead tries to go directly to the destination address. I opened the app code to look for references to this variable, but I couldn't find any. Can anyone shed light on this and explain how I can check for references to those variables? In other apps I manage to use the proxy variable successfully; this only happens with the AD LDAP app.
Hi there, I have a cluster on MongoDB Atlas that contains the data for my application. That cluster produces logs that can be downloaded in .log format or .gz (compressed) format. To query and view my logs easily, I would like to use Splunk. Is there any way to ingest those logs from MongoDB Atlas into a Splunk instance via API? If there is, could anyone kindly share documentation or a process describing how to accomplish this? NB: I can obtain the logs from MongoDB Atlas via a cURL request.
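For reference, a minimal sketch of one possible path, assuming an HTTP Event Collector (HEC) token has been created on the Splunk side; the host, port, token, and sourcetype below are placeholders, and the event payload would be a log line pulled from the Atlas API:

```
curl https://<splunk-host>:8088/services/collector/event \
  -H "Authorization: Splunk <HEC_TOKEN>" \
  -d '{"sourcetype": "mongodb:atlas", "event": "<one log line downloaded from Atlas>"}'
```

A script could chain the Atlas log-download cURL call with a loop of HEC posts like this one.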
Which Forwarder agent version includes the fix for the OpenSSL 1.0.2 < 1.0.2zk vulnerability? If there is no fix for this yet, when can we expect one, or which forwarder version will include the fix to remediate it? OpenSSL SEoL (1.0.2.x); OpenSSL 1.0.2 < 1.0.2zk Vulnerability.
Hello, we have hit an issue where the Unix and Linux add-on is incompatible with RHEL 9.4 (because of a scripted input). Are the add-ons used by Splunk PCI Compliance compatible with RHEL 9.4 and above? Regards
Hello Splunkers, I have a requirement to run an alert on the second Tuesday of each month at 5:30am. I came up with:

```
30 05 8-14 * 2
```

However, Splunk tends to run it every Tuesday regardless of whether the date falls between the 8th and the 14th. Is this a shortcoming in Splunk, or am I doing something wrong?
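For what it's worth, this matches standard cron semantics rather than a Splunk bug: when both the day-of-month and day-of-week fields are restricted, cron treats them as an OR, so the job fires on every Tuesday and on every 8th through 14th. A minimal sketch of one common workaround is to schedule the search daily for the 8th-14th (cron 30 5 8-14 * *) and guard the weekday inside the SPL itself; the where clause here is one possible guard, with the base search as a placeholder:

```
index=your_index your_search_terms
| eval weekday=strftime(now(), "%A")
| where weekday=="Tuesday"
```

The search still runs on the other days in the window but returns nothing, so the alert only fires on the second Tuesday.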
Hi Team, we are trying to extract JSON data with a custom sourcetype. With the current configuration, all JSON objects are being combined into a single event in Splunk. Ideally, each JSON object should be recognized as a separate event, but the configuration is not breaking them apart as expected. I observed that each JSON object has a comma after the closing brace }, which appears to be causing the issue by preventing Splunk from treating each object as a separate event.

Sample data:

```
{
"timestamp":"1727962122",
"phonenumber": "0000000"
"appname": "cisco"
},
{
"timestamp":"1727962123",
"phonenumber": "0000000"
"appname": "windows"
},
```

Error message: JSON StreamID:0 had parsing error: Unexpected character while looking for value comma ','

Thanks in advance
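A minimal props.conf sketch of one possible fix, with the stanza name as a placeholder for your custom sourcetype; the capture group in LINE_BREAKER swallows the comma and whitespace between objects, so each {...} block becomes its own event (the timestamp settings assume the epoch-seconds field shown in the sample):

```
[my_custom_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}(,\s*)\{
TIME_PREFIX = "timestamp":"
TIME_FORMAT = %s
```

Note that the sample objects are also missing commas between some of their own keys, which would still trip strict JSON parsing even after the events are split correctly.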
I sent the following SPL to run as a background job in Splunk:

```
| metadata type=sourcetypes | search totalCount > 0
```

After that, I deleted the search job, but when I refresh the Splunk search page (F5), the same job runs again. How can I delete this job completely? It keeps getting executed over and over.
We found the Visdom for Citrix VDI listing on Splunkbase interesting, but we are not seeing how to download the app to review it. Is this app still available and supported by the developer(s)?
I haven't upgraded the UF in a while, and I'm having some trouble figuring out how I should proceed with bringing it up to date. I see that the current version has changed the user from splunk to splunkfwd. I also see that updating an existing UF keeps the user as splunk (this seems to work, but not always). This means that new installations will use a different username than updated UFs. That is a problem for me because I use scripts to make the permission changes that give splunk access to the appropriate log files, and I'm not finding a lot of guidance on how to keep this sane. How have other organizations dealt with this? I'm tempted to uninstall the UF and do a fresh install on every system. That will force me to manage Splunk servers differently than other Linux servers, but it has to be less complicated than trying to keep track of which systems use splunk and which use splunkfwd.
Hi All, I am having issues with DB Connect: the version that I downloaded is having issues sending data.
Hi, I am kind of stuck and need help. I am creating a chart in a Splunk dashboard, and for the y-axis I have nearly 20 values which are to be shown as legends. After a certain number of values they are grouped as "other", which I don't want; I need them displayed as separate series. I am also prepared to turn off the legend. The query used is:

```
index = "xyz"
| rex field=group "<Instance>(?<instance>[^<]+)</Instance>"
| rex field=group "<SESSIONS>(?<sessions>\d+)</SESSIONS>"
| chart values(sessions) BY _time, instance
```

May I know which option in the chart will stop it collapsing the values on the y-axis?
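A minimal sketch of one option, keeping the query from the question: the chart command's limit and useother arguments control this grouping (limit=0 removes the default series cap, and useother=f suppresses the OTHER bucket):

```
index = "xyz"
| rex field=group "<Instance>(?<instance>[^<]+)</Instance>"
| rex field=group "<SESSIONS>(?<sessions>\d+)</SESSIONS>"
| chart limit=0 useother=f values(sessions) BY _time, instance
```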
I am a grad student and I recently took a quiz on Splunk. There was a true/false question. Q: "Splunk Alerts can be created to monitor machine data in real-time, alerting of an event as soon as it logged by the host." I marked it as false because it should be "as soon as the event gets indexed by Splunk" rather than "as soon as the event gets logged by the host". I raised a query because I was not awarded marks for this question, but the counter-argument was "per-result triggering helps to achieve this". But isn't it basic that Splunk can only read indexed data? Can anyone please verify whether I'm correct? Thanks in advance.
Hi, our company does not yet have Splunk Enterprise Security, but we are considering getting it. Currently, our security posture includes a stream of EDR data from Carbon Black containing the EDR events and watchlist hits. We want to correlate the watchlist hits to create incidents. Is this something Splunk Enterprise Security can do right out of the box, given access to the EDR data? If so, how do we do this in the Splunk Enterprise Security dashboard?
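For context, ES does this through correlation searches: scheduled SPL searches whose results create notable events (the incidents shown in Incident Review). Whether the watchlist correlation works out of the box depends on the Carbon Black add-on mapping the data into ES's data models; a custom correlation search is always an option. A purely hypothetical sketch, with the index, sourcetype, and field names as placeholders rather than the add-on's actual schema:

```
index=carbonblack sourcetype="carbonblack:watchlist:hit"
| stats count earliest(_time) AS first_seen latest(_time) AS last_seen values(watchlist_name) AS watchlists BY dest
| where count > 0
```

Saved as a correlation search in ES, each result row would generate a notable event for the affected host.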
Hello everyone, I am a programmer at Terus. After being promoted, I am managing a small group of programmers. Terus will soon receive an order from a domestic enterprise, and my boss wants to assign it to me and my new team. The order's requirements are quite simple but require the use of some Splunk features. I already have basic working knowledge of Splunk, but after surveying my group, the five young people in it do not know anything about Splunk. I have two months to train them to prepare for the project. I tried to teach them in the first week, but it does not seem very feasible. While searching the documentation, I came across this community, where a few people are Splunk admins and engineers. So today I want to ask everyone: what is the best way for newcomers to learn, so that within the next month they can be confident enough to do the project? Note: these members are quite smart and agile, but it seems my communication is not very good, so I need help from everyone. I hope to receive everyone's suggestions.
I have XML input logs in Splunk. I have already extracted the required fields, totaling 10 fields. I need to ensure that any other fields that get extracted are ignored and not indexed in Splunk. Can I set it up so that a field that is not in my extracted list is automatically ignored? Is this possible?
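One point worth noting: fields produced by search-time extraction are not indexed at all, so there may be nothing to suppress on the indexing side. If the extra fields come from automatic extraction (for XML, KV_MODE = xml), one possible props.conf sketch is to turn the automatic mode off and keep only your explicit extractions; the stanza and extraction names below are placeholders for your existing configuration:

```
[my_xml_sourcetype]
KV_MODE = none
EXTRACT-field1 = <your existing regex for field 1>
EXTRACT-field2 = <your existing regex for field 2>
```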
Hi everyone, I have started working with Splunk UBA recently and have some questions.

Anomalies: How long does it usually take to identify anomalies after the logs are received? Can I define anomaly rules? Is there anywhere that explains what the existing anomaly categories are based on, or what they look for in the traffic?

Threats: How long does it take to trigger threats after anomalies are identified? Is there any source I can rely on for creating threat rules? I am creating and testing rules, but with no results.
I'm using a query which returns an entire day of data:

```
index="index_name" source="source_name"
```

This search returns over 10 million events. My requirement is that if the volume drops below 10 million, I should receive an alert. But when this alert runs, the search never completes, because it takes a long time, and the alert keeps triggering before the search finishes. Is there any way to trigger this alert only after the search has run to completion?
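A minimal sketch of one way to make the count cheap enough to finish, assuming you only need the event count and not the raw events: tstats counts from index metadata instead of retrieving 10 million events, so the search completes quickly, and the alert can then trigger whenever the number of results is greater than zero:

```
| tstats count where index="index_name" source="source_name" earliest=-1d@d latest=@d
| where count < 10000000
```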
Can you please assist me in deleting and wiping my account?
I've imported a csv file, and one of the fields, called "Tags", looks like this:

Tags="avd:vm, dept:support services, cm-resource-parent:/subscriptions/e9674c3a-f9f8-85cc-b457-94cf0fbd9715/resourcegroups/avd-standard-pool-rg/providers/microsoft.desktopvirtualization/hostpools/avd_standard_pool_1, manager:JohnDoe@email.com"

I'd like to split each of these tags up into its own field/value, AND extract the first part of the tag as the field name. The resulting new fields/values would look like this:

avd="vm"
dept="support services"
cm-resource-parent="/subscriptions/e9674c3a-f9f8-85cc-b457-94cf0fbd9715/resourcegroups/avd-standard-pool-rg/providers/microsoft.desktopvirtualization/hostpools/avd_standard_pool_1"
manager="JohnDoe@email.com"

I've looked at a lot of examples with rex, MV commands, etc., but nothing that pulls the new field name out of the original field. The format of the Tags field is always the same as listed above, for all events. Thank you!
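A minimal sketch of one possible approach, using the extract (kv) command with custom delimiters. extract only reads from _raw, hence the rename shuffle; pairdelim and kvdelim are the documented options, but values containing extra colons, the spaces after each comma, and field names with characters like "-" may need additional cleanup, so treat this as a starting point rather than a finished solution:

```
... your base search ...
| rename _raw AS _raw_backup, Tags AS _raw
| extract pairdelim="," kvdelim=":"
| rename _raw AS Tags, _raw_backup AS _raw
```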