All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello all, I am new to the world of Splunk. I have seen several presentations discussing building dashboards with React and Splunk Cloud, and I have been trying to find documentation on how to do this, with no major luck so far. In a nutshell, I am looking for docs that would tell me:
1. The required npm packages (@splunk/react-ui, @splunk/dashboard-context)
2. How to authenticate with my Splunk Cloud account (client, tenant, redirect URL, ...)
3. How to configure the core components (configs, props, ...)
Any pointer in the right direction would be much appreciated.
Hello, I'm using the Splunk image with Docker and Kubernetes. I want to create users automatically each time I create a new environment. Is there a way to run a Kubernetes Job that runs CLI commands? Or maybe there is another way to achieve my goal? Thanks
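One option is exactly what the question suggests: a Kubernetes Job whose container runs the Splunk CLI against the running instance. The sketch below is a hypothetical manifest — the service name (`splunk-service`), Secret name (`splunk-admin`), key names, and role are all assumptions to adapt to your deployment; the CLI path matches the standard splunk/splunk image layout.

```yaml
# Hypothetical Job: create a Splunk user by calling the CLI against the
# Splunk pod's management port (8089). Names and secrets are placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: splunk-create-user
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: create-user
          image: splunk/splunk:latest
          command: ["/bin/sh", "-c"]
          args:
            - >
              /opt/splunk/bin/splunk add user newuser
              -password "$NEW_USER_PASSWORD" -role user
              -auth "admin:$ADMIN_PASSWORD"
              -uri https://splunk-service:8089
          env:
            - name: ADMIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: splunk-admin
                  key: password
            - name: NEW_USER_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: splunk-admin
                  key: new-user-password
```

An alternative that avoids running the CLI at all is to POST to the REST endpoint `services/authentication/users` on port 8089, or to bake default users into the image via `user-seed.conf` at first start.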
I am currently running a search that returns the names of hosts that are unregistered at a particular time, and then, after 1 hour, I run the same search again to fetch unregistered hosts. I then manually compare the two reports for hosts that were unregistered in the first report and have changed to registered in the second. Based on that comparison, I take the necessary action on the hosts that are still unregistered. My question: is there a way to compare the data across that 1 hour in one single search? The query I currently use is: | inputlookup R2VirtualDedktopAssignments | where Environment="PROD" AND RegistrationState="Unregistered"
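If the registration state is (or can be) indexed as timestamped events rather than read from a lookup, one search can compare the two points in time by looking at the earliest and latest state per host. This is a hedged sketch — the index, sourcetype, and field names are assumptions:

```
index=vdi sourcetype=desktop_assignments earliest=-2h
| stats earliest(RegistrationState) AS first_state latest(RegistrationState) AS last_state BY Name
| where first_state="Unregistered" AND last_state="Registered"
```

If the data only exists in the lookup, a common workaround is a scheduled search that snapshots the lookup with a run timestamp (via outputlookup) so the two newest snapshots can be compared in one search.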
Hi all, I am receiving Windows event logs from a domain controller via an NXLog agent. The data is being sent over UDP/514 and is formatted as BSD-style syslog. While I am successfully receiving and ingesting this data, the problem I have is as follows: how do I have Splunk parse this data so that it can be used by the Windows TA add-on? I am thinking I need to create something in props.conf, maybe? Any questions, please ask. Regards, TheTech
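The usual approach is to rewrite the sourcetype at parse time so the Splunk Add-on for Microsoft Windows can apply its extractions. A minimal, hypothetical sketch — the stanza name, the REGEX token, and the target sourcetype are assumptions (match on a string actually present in your NXLog events, and check which sourcetype your TA version expects):

```
# props.conf -- applied to events arriving on the UDP:514 input
[source::udp:514]
TRANSFORMS-set_wineventlog = set_wineventlog_sourcetype

# transforms.conf
[set_wineventlog_sourcetype]
REGEX = EventLog
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::WinEventLog
```

Note this must live on the first full Splunk instance that parses the data (indexer or heavy forwarder), not on a universal forwarder.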
Hi all, if I manage two separate Splunk Cloud accounts and have a UF agent configured to talk to the first, how can I reconfigure it to talk to the second, please? Second part of the question: is it possible for a single agent to send data to both accounts at the same time without having to install a second agent? Thank you very much in advance for your help.
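Forwarder destinations live in outputs.conf, so both parts of the question come down to which tcpout groups are defined and which are listed as defaults. A hypothetical sketch — group names and endpoints are placeholders:

```
# outputs.conf on the UF (hypothetical group names and hostnames)
[tcpout]
# list one group to send to a single stack, or both to clone data to both
defaultGroup = cloud_a, cloud_b

[tcpout:cloud_a]
server = inputs.stack-a.splunkcloud.com:9997

[tcpout:cloud_b]
server = inputs.stack-b.splunkcloud.com:9997
```

In practice each Splunk Cloud stack ships its own forwarder credentials app (containing outputs.conf and the stack's certificates), so switching or dual-sending usually means installing the second stack's credentials app as well. Cloning to both stacks also duplicates the data volume counted against each license.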
I am monitoring a CSV file and creating a dashboard based on it. The file may be modified many times a day, or not at all for many days. Rows are not just added to the file but also removed, plus the contents are edited, which causes Splunk to re-index the whole file. I can't rely on the latest values, because they would also return lines the user has deleted, and I can't set a time range because I do not know when the file was last changed. Due to the above issues, my dashboard will always be inconsistent. The only ways I can think of to solve this are:
1. Somehow remove all old events for the file from Splunk, then index the file and run queries over "All Time".
2. Write a script that reads the CSV, appends a date-time field, and re-creates a new CSV which is monitored. Do this every 15 minutes, causing the Splunk monitor to re-index the whole file each time. (But this causes another issue: if the user opens the dashboard at the wrong moment relative to the 15-minute cycle, they can see partial or extra data.)
Any ideas how this problem can be solved in a more elegant manner?
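Option 2 above can be made consistent if every row in a snapshot carries the same timestamp value: searches can then keep only rows belonging to the newest complete snapshot, which avoids the partial-read problem. This is a minimal sketch of that rewrite step — file paths and the `snapshot_time` column name are assumptions:

```python
# Re-write the monitored CSV with a snapshot timestamp column, so a search
# can filter to the rows of the most recent complete snapshot only.
import csv
from datetime import datetime, timezone

def snapshot_csv(src_path: str, dst_path: str) -> str:
    """Copy src_path to dst_path, appending a 'snapshot_time' column.

    Returns the timestamp written, so callers can log or verify it.
    """
    stamp = datetime.now(timezone.utc).isoformat()
    with open(src_path, newline="") as src, open(dst_path, "w", newline="") as dst:
        reader = csv.reader(src)
        writer = csv.writer(dst)
        header = next(reader)
        writer.writerow(header + ["snapshot_time"])
        for row in reader:
            writer.writerow(row + [stamp])
    return stamp
```

On the search side, something like `| eventstats max(snapshot_time) AS latest | where snapshot_time=latest` would then drop rows from older (or half-indexed) snapshots.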
Hello, the Gravity inline documentation mentions that load balancing has to be handled outside of the cluster (from a DNS?), but it is not explicit about whether a virtual IP address needs to be set up before installation. What are the network prerequisites for a DSP installation in a cluster? Do we need a DNS load balancer? A VIP set up before the installation? Can we rely on the first node's (master/control-plane) IP address, then switch to another? Regarding the low-latency network mentioned as good practice for communication between Gravity nodes in the DSP deployment procedure: is there additional information about its requirements? And another question: can I start with a DSP standalone deployment and turn it into a cluster later on? Obviously the standalone would be for testing/demoing while the HA cluster would be for production, but this could be useful for testing as well.
Hi All, I have one field, Rundatetime, in the format below: 10/25/2020 3:57 10/16/2020 5:22 I just want to extract the date from it, as below: 10/25/2020 10/16/2020 How can I do that? Can someone guide me? My current query is this: | inputlookup mnr_rally_defects2.csv | table Rundatetime
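One way to do this is to parse the string into epoch time and format only the date part back out. A hedged sketch, assuming the timestamps really follow the month/day/year hour:minute pattern shown above:

```
| inputlookup mnr_rally_defects2.csv
| eval Rundate=strftime(strptime(Rundatetime, "%m/%d/%Y %H:%M"), "%m/%d/%Y")
| table Rundatetime Rundate
```

If the format varies, a simpler string split also works: `| eval Rundate=mvindex(split(Rundatetime, " "), 0)`.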
Every time I try to download Splunk Enterprise 8.1.1, Splunk responds with "Oops. we see you're already logged in." and returns me to the download page. I'm using the most current version of Windows 10 Pro and Google Chrome. Does anyone know how to get past this and get to the actual download? Thanks in advance.
I want to extract the below fragment from the _raw event for all entries returned by the query. Can you please help me with this? None, 'Users': [{'Id': '10'}] Thanks in advance
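If the goal is to pull the Id value out of that fragment at search time, a rex sketch like the following may work — the capture-group name `user_id` is an assumption, and the pattern assumes the fragment appears literally in _raw with single quotes as shown:

```
... | rex field=_raw "'Users':\s*\[\{'Id':\s*'(?<user_id>\d+)'\}\]"
```

If the events are valid JSON rather than Python-repr style, `spath` would be the more robust choice.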
Hi All, is it possible to use SmartStore with an on-prem Splunk deployment if the SmartStore remote store is implemented on AWS S3? I thought it was possible as long as we use a high-speed network service from AWS, such as Direct Connect. I saw the doc below, and it seems that it is not possible: https://docs.splunk.com/Documentation/Splunk/8.1.1/Indexer/SmartStoresystemrequirements Could you please let me know if it is possible? If S3 cannot be used as a SmartStore remote store with an on-prem Splunk deployment, what is the option? Your help would be appreciated. Thanks.
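For reference, the remote store is defined per volume in indexes.conf; whether it is supported for your deployment depends on meeting the latency and bandwidth requirements in the doc linked above, so check those before committing. A hypothetical sketch — bucket name, endpoint, and index name are placeholders:

```
# indexes.conf -- hypothetical SmartStore volume backed by S3
[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[my_index]
remotePath = volume:remote_store/$_index_name
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
```

If S3 over the WAN cannot meet the requirements, the common alternative is an on-prem S3-compatible object store (e.g., one exposing the S3 API locally).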
We need to generate a report in Splunk that tells us whether all of our log sources and servers are reporting to Splunk. Is there any query we can use to fetch these details?
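A common starting point is tstats over indexed metadata, which is fast and shows the last time each host/source combination reported. A hedged sketch — restrict `index=*` to your real indexes in practice:

```
| tstats latest(_time) AS last_event WHERE index=* BY host index sourcetype
| eval last_event=strftime(last_event, "%Y-%m-%d %H:%M:%S")
| sort - last_event
```

For a quick host-only view, `| metadata type=hosts index=*` gives first/last event times per host. Note that neither search can list hosts that have never sent data; detecting those requires comparing against an external inventory lookup.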
I have completed the Rubrik quick-start installation guide: https://github.com/rubrikinc/rubrik-addon-for-splunk/blob/master/docs/quick-start.md I am getting data into my index in Splunk Cloud; however, I am not able to get the Rubrik app to populate with data. The Cluster Name field is greyed out and there is an error, "Could not create search". I've created my datasets per the documentation, and there are no troubleshooting steps regarding dashboard issues. Can someone provide some guidance? I'm almost there. Thanks!
Hello, I have downloaded the RWI Executive Dashboard app, the RWI Executive add-on, and the MS Teams add-on. When I go to set up the RWI Executive Dashboard app, I get a blank screen after I press the "Continue to app setup page" button. I have Enterprise 8.1, running on CentOS 8. Any help would be greatly appreciated.
I have one requirement. I have one column in a lookup, CaseCreatedDate, containing 175,908 entries spanning 2019 to 2020. Currently it shows data in this format:
2019-01-01T00:25:40.000+0000
2019-01-01T10:36:15.000+0000
2019-01-01T10:36:15.000+0000
I need to show Case_Status with it, like below:
Case_CreatedDate Case_Status
2019-01-01T00:25:40.000+0000 Closed
2019-01-01T10:36:15.000+0000 Resolved-Fulfill
2019-01-01T10:36:15.000+0000 Resolved-Fulfill
2019-01-01T10:36:15.000+0000 Resolved-Fulfill
2019-01-01T10:36:15.000+0000 Pending-UW-Review
Is it possible to show Case_CreatedDate grouped by year, since there is so much data? Can someone guide me on this? Thanks in advance.
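Grouping by year means parsing the timestamp and aggregating, rather than listing every row. A hedged sketch — the lookup file name is a placeholder, and the time format string assumes the ISO-with-offset values shown above:

```
| inputlookup your_lookup.csv
| eval year=strftime(strptime(Case_CreatedDate, "%Y-%m-%dT%H:%M:%S.%3N%z"), "%Y")
| stats count BY year Case_Status
```

This collapses the 175k rows into one row per year and status, which is far more readable; swap `"%Y"` for `"%Y-%m"` to group by month instead.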
I am adding a custom field/value to a log event within Splunk at index time. This also includes a DEFAULT_VALUE if the match fails. Here are examples of my config:
transforms.conf
[app_name]
REGEX = \"app_name\":\".+?(\w+)-(\w)-.+?\"
FORMAT = app::$1$2
DEFAULT_VALUE = app::"not_specified"
WRITE_META = true
fields.conf
[app]
INDEXED=true
Everything works except that the default value is not searchable. Under interesting fields in the Splunk UI I can see app -> "not_specified" as a value with an event count; however, when I click on it or add it to a search, 0 results are returned. The non-default values return fine and are searchable; just the static default value is not. Any help is much appreciated.
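One possible cause (an educated guess, not a confirmed diagnosis): the quote characters in DEFAULT_VALUE become part of the indexed value, so the stored token is literally "not_specified" including quotes, while a search for app=not_specified looks for the bare token. A sketch of the stanza with the quotes removed:

```
# transforms.conf -- quotes removed from DEFAULT_VALUE so the indexed
# token matches what a search for app=not_specified actually looks for
[app_name]
REGEX = \"app_name\":\".+?(\w+)-(\w)-.+?\"
FORMAT = app::$1$2
DEFAULT_VALUE = app::not_specified
WRITE_META = true
```

Note that index-time changes only apply to newly indexed events; already-indexed events keep the old value.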
I have an event action I want users to be able to access in a dashboard. Right now I need to include an event panel, and then users need to go through there to access it. Is it possible to call the event action through a hyperlink?
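If the event action is a link-type workflow action (one that opens a URL built from field values), one workaround is to reproduce that URL directly as a table drilldown in Simple XML, which gives users a clickable link without going through the event panel. A hypothetical sketch — the search, fields, and URL are placeholders:

```
<table>
  <search>
    <query>index=main | table host uri</query>
  </search>
  <drilldown>
    <link target="_blank">https://example.com/lookup?host=$row.host$</link>
  </drilldown>
</table>
```

This only covers link-style actions; script- or search-based event actions can't be invoked from a plain hyperlink this way.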
I get a different result set when using jobs.export of the Python SDK with a simple stats query compared to the same query (and time range) in the Splunk UI. jobs.export returns a list of results of the following form: there is a repeating pattern with several sets ending in "lastrow":true and repeating "offset" values, almost as if partial results are included several times. Only the last set matches the final results of the query in the UI. Schematically, the results of this call: jobs.export("search my_id | stats count by index").read().decode('utf8').split('\n') have this form ['{"preview":true, "offset":0, "result": {"index": "index_a", "count":"2"}}', '{"preview":true, "offset":1, "result": {"index": "index_b", "count":"4"}}', '{"preview":true, "offset":2, "lastrow":true, "result": {"index": "index_b", "count":"4"}}', '{"preview":true, "offset":0, "result": {"index": "index_a", "count":"6"}}', '{"preview":true, "offset":1, "result": {"index": "index_b", "count":"12"}}', '{"preview":true, "offset":2, "lastrow":true, "result": {"index": "index_b", "count":"50"}}', '{"preview":true, "offset":0, "result": {"index": "index_a", "count":"18"}}', '{"preview":true, "offset":1, "result": {"index": "index_b", "count":"102"}}', '{"preview":true, "offset":2, "lastrow":true, "result": {"index": "index_b", "count":"499"}}', '{"preview":true, "offset":0, "result": {"index": "index_a", "count":"18"}}', '{"preview":true, "offset":1, "result": {"index": "index_b", "count":"102"}}', '{"preview":true, "offset":2, "lastrow":true, "result": {"index": "index_b", "count":"499"}}', ] The last couple of segments ending with "lastrow":true share the same count, which matches the UI. Is there some flag we need to insert in kwargs? Currently using only earliest_time, latest_time, count:0, and sample_ratio:1.
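This pattern is consistent with the export endpoint streaming intermediate preview result sets while the search runs; only the final set reflects the finished search (which is why it matches the UI). One way to handle it client-side, without any SDK-specific flags, is to parse the newline-delimited JSON and keep only the last complete set. A sketch, assuming the stream shape shown above:

```python
# Keep only the final result set from a jobs.export stream. The stream
# repeats preview sets, each terminated by a row with "lastrow": true;
# the last such set corresponds to the finished search.
import json

def final_result_set(raw: str) -> list:
    """Return the 'result' dicts of the last complete set in an export stream."""
    sets, current = [], []
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue
        row = json.loads(line)
        if "result" in row:
            current.append(row["result"])
        if row.get("lastrow"):
            sets.append(current)
            current = []
    if current:  # trailing rows without a closing lastrow marker
        sets.append(current)
    return sets[-1] if sets else []
```

Alternatively, a blocking job (create the job, wait until it is done, then fetch results) avoids previews entirely at the cost of losing the streaming behavior export provides.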
We have a scheduled task running a .bat file. The task successfully copies the file from the source server to a local folder. Part two of the task invokes our Reflections FTP client software and sends a single file via SFTP to the final destination, and this part is failing. We are seeing the following error message in our Application event logs, and the task hangs in the running state: .NET Runtime version 4.0.30319.0 - The profiler has requested that the CLR instance not load the profiler into this process. Profiler CLSID: 'AppDynamics.AgentProfiler'. Process ID (decimal): 363548. Message ID: [0x2516]. Please let me know how to fix this issue.
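That message indicates the AppDynamics .NET agent's CLR profiler is declining to attach to the process. One workaround to try (an assumption to verify with AppDynamics support, not a confirmed fix) is to disable CLR profiling for the script's child processes by clearing the profiler environment variables in the .bat before launching the FTP client; the variables below are the standard .NET profiling switches, while the program path is a placeholder:

```
REM Hypothetical workaround: prevent the .NET profiler from attaching
REM to processes started by this script.
set COR_ENABLE_PROFILING=0
set CORECLR_ENABLE_PROFILING=0
"C:\Path\To\ReflectionsFTP.exe" /script send_file.rxs
```

If the hang persists with profiling disabled, the SFTP failure likely has a separate cause worth investigating on its own.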
Hello, I'm having problems receiving data from the Jira Issues Collector. Events that include accented characters seem to have an encoding problem. I already tried changing the charset and adding a transforms.conf file, but I'm still stuck. I'm now trying a field transformation using this sed regex: (s/\\u00e9/e/g) Here's the data example.
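Two index-time settings in props.conf are worth checking for this kind of problem: CHARSET, to make sure the input is decoded correctly in the first place, and SEDCMD, which can rewrite literal escaped sequences in _raw as a fallback. A hypothetical sketch — the sourcetype name is an assumption:

```
# props.conf for the Jira sourcetype (stanza name is a placeholder)
[jira:issues]
CHARSET = UTF-8
# Fallback: rewrite the literal escaped sequence at index time,
# mirroring the sed regex from the question above
SEDCMD-accents = s/\\u00e9/e/g
```

Note both settings only affect newly indexed events, and SEDCMD must be placed on the parsing tier (indexer or heavy forwarder) to take effect.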