All Topics


Hi, I have trained a FieldSelector model and I need to inspect the findings with the summary function. However, I am receiving the following error (screenshot not included). Can you please help? Many thanks, Patrick

I have created a field transformation via the Splunk GUI. I want to add a field to this transformation, but when I open the field transformation (Settings > Fields > Field transformations), the already existing fields are not visible. Is it possible to change the existing fields via the GUI?

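If the GUI will not show or edit the existing fields, one fallback is the configuration file itself: a GUI-created field transformation is stored in transforms.conf under the owning app. A minimal sketch, with placeholder stanza, regex, and field names (not your actual configuration):

$SPLUNK_HOME/etc/apps/<app>/local/transforms.conf

[my_transformation]
REGEX = (\w+);(\w+)
FORMAT = existing_field::$1 added_field::$2

Adding a capture group and extending FORMAT adds the field; changes become visible after a restart or configuration refresh.
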
Hi everyone, could you please help me with the below queries?
How to delete a macro from the CLI? (if the macro permission is private)
How to delete a macro from the CLI? (if the macro permission is "this app only")

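A minimal sketch of the file-system approach, assuming shell access to the search head (app, user, and macro names below are placeholders): macros live in macros.conf, and removing the stanza removes the macro.

Private macro:   $SPLUNK_HOME/etc/users/<username>/<app>/local/macros.conf
App-only macro:  $SPLUNK_HOME/etc/apps/<app>/local/macros.conf

Delete the macro's stanza from the relevant file, e.g.:

[my_macro]
definition = index=main | stats count

then restart Splunk (or reload the configuration) so the change takes effect.
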
I want to specify a field that contains a time as earliest and another field as latest, so that my SPL runs with earliest set to the earliest value of field1 and latest set to the latest value of field2. Example:

index=abcd | table starttimeUTC endtimeutc

The search above should run as earliest=<earliest value of starttimeUTC> and latest=<latest value of endtimeutc>.

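A minimal sketch of one common pattern, assuming starttimeUTC and endtimeutc are epoch times (if they are strings, convert them with strptime first); the inner index name is a placeholder:

index=abcd
| stats min(starttimeUTC) as et, max(endtimeutc) as lt
| map search="search index=my_other_index earliest=$et$ latest=$lt$"

map substitutes the computed field values into the inner search, so it runs bounded by the earliest start time and the latest end time.
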
Hi, I am looking to plot a graph using four fields in Splunk: a relationship graph among Domain, Category, Ipaddress, and Severity, similar to the Excel graph below.

Sample data:

Domain    Category    Ipaddress       Severity
domain1   prod        192.168.1.20    Low
domain2   non-prod    192.168.1.21    High
domain3   prod        192.168.1.22    Critical
domain3   prod        192.168.1.22    Medium
domain4   non-prod    192.168.1.23    Low
domain1   prod        192.168.1.20    Low
domain2   non-prod    192.168.1.21    High
domain3   prod        192.168.1.22    Critical
domain3   prod        192.168.1.22    Medium
domain4   non-prod    192.168.1.23    Low
domain1   prod        192.168.1.20    Low
domain2   non-prod    192.168.1.21    High
domain3   prod        192.168.1.22    Critical
domain3   prod        192.168.1.22    Medium
domain1   prod        192.168.1.20    High
domain1   prod        192.168.1.20    Critical

Graph prepared using Excel: [chart screenshot not included]

Please advise a search command to plot this relationship as a visualization.

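A minimal starting sketch, assuming the events already carry these four fields (index and sourcetype are placeholders): aggregate first, then pick a multi-dimensional visualization.

index=my_index sourcetype=my_sourcetype
| stats count by Domain, Category, Ipaddress, Severity

Built-in charts handle two or three dimensions at a time (e.g. | chart count over Domain by Severity); for all four, a downloadable visualization such as the Sankey diagram (which links field pairs) or a trellis layout over one of the fields is a common workaround.
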
Hi, I am in the feature selection stage of my ML assignment. The data I am working with is as follows:

index=nwstats sourcetype="traffic:delta"

I need to find the 3 "best" features to use before I test different ML models on the data. To do this, I am trying to use the FieldSelector in MLTK and then view the results with the summary command. As you can see, I am getting an error (screenshot not included). Can you please help? Many thanks, Patrick

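For reference, a minimal FieldSelector sketch, assuming a numeric target field (all field and model names are placeholders, and the exact option names are worth checking against the MLTK docs for your version):

| fit FieldSelector type=numeric mode=k_best param=3 target_field from * into my_field_selector
| summary my_field_selector

mode=k_best with param=3 asks for the 3 highest-scoring features; summary then lists the fields the model kept.
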
Good day, I am managing an infrastructure that currently has both SAS-2 and SAS-3 hard drives mixed in with the OS and data partitions on the indexers. I was curious whether this would have an impact across all of the other indexers, since SAS-2 operates at 6 Gbps vs SAS-3 at 12 Gbps. If I remember correctly, indexers utilize the member with the lowest CPU and memory. Would this happen for SAS speeds too?

The new DB Connect v3 doesn't like inline comments (--) in the SQL query; you must update the query to use multi-line comments in order to save the queries.

Use this:

/* your SQL STATEMENT COMMENT */

Not this:

--your SQL comment here

Once the comments have been cleaned up, you should be able to run your queries again. Note: if you're using queries configured with rising columns, watch the video below on YouTube by the Splunk team. https://www.youtube.com/watch?v=oPB2Lpd9ZAs Good luck!

Hello, I have an input on my dashboard page that is either a month ("01-2022", "02-2022") or a quarter ("Q1-2022"). Depending on the selection, I want a different timechart span. For example:

query | timechart span=1mon count(number)   [for month]
query | timechart span=qtr count(number)    [for quarter]

I want logic like this: if input matches "Q%", then query | timechart span=qtr count(number), else query | timechart span=1mon count(number). How can I do this?

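One possible approach in Simple XML (a sketch; the input and token names are placeholders, and the match syntax is worth verifying against the dashboard reference for your version): derive a span token from the input's value in a <change> block, then reference that token in the timechart.

<input type="dropdown" token="period">
  <change>
    <condition match="match(&quot;$value$&quot;, &quot;^Q&quot;)">
      <set token="span_tok">qtr</set>
    </condition>
    <condition>
      <set token="span_tok">1mon</set>
    </condition>
  </change>
</input>

<query>query | timechart span=$span_tok$ count(number)</query>

The bare <condition> acts as the "else" branch, so the month span applies whenever the value does not start with Q.
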
Gentlemen, we are on Splunk Cloud. In my raw events coming from AWS, Splunk by default shows a field called "category" under "Interesting fields". However, its value (as in its extraction) isn't what we are expecting it to be; it only manages to extract part of the complete string.

For example, the raw events have category as follows (in JSON format, without quotes):

Policy:IAMUser/RootCredentialUsage

But Splunk is instead showing the value of category as: Policy

Now, what's happening is that if I use the IFX or the rex command to create a field extraction, keeping the same name for my field, i.e. category, with the value Policy:IAMUser/RootCredentialUsage, my newly extracted field keeps getting overwritten with the default old values again. I am assuming this is because the field names are the same (category), so Splunk takes its own precedence. Is this a case of an index-time vs search-time field extraction conflict? How do I make Splunk use whatever value my field extraction (rex or IFX) produces for category, while retaining the name as-is? I don't want the category field to display its old indexed value.

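A minimal search-time workaround sketch, assuming the full value appears in the raw JSON as "category":"..." (the regex is an assumption about your event layout): extract under a temporary name, then overwrite category with eval, which runs after automatic field extraction.

index=my_aws_index
| rex "\"category\":\s*\"(?<category_full>[^\"]+)\""
| eval category=category_full
| fields - category_full
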
Hi, I need to use Linear Regression to predict network volumes at the moment. The index I am using has a number of categorical fields that I wish to change to dummy variables. I am using the FieldSelector functionality and I am getting the following error (screenshot not included). Can you please help? Thanks, Patrick

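For context, a minimal LinearRegression sketch (the target and predictor field names are placeholders): MLTK's fit command one-hot encodes categorical string fields into dummy variables automatically, so an explicit conversion step is usually unnecessary.

| fit LinearRegression volume from category_field1 category_field2 numeric_field1 into my_lr_model
| apply my_lr_model
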
Hello Splunkers, I have the following raw event. It was parsing with the correct date and time until daylight saving started, but after March 13th (when daylight saving started) I see a one-hour mismatch. What changes should I make in props.conf to show the correct time?

3/13/22 11:59:59.989 PM

2022-03-13 22:59:59,989 |v144031v~212657|*** conn[SSL/TLS]=103 CLIENT(1.1.2.2:23) disconnected.

Thanks in advance

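A minimal props.conf sketch, assuming the source logs in US Eastern time (the sourcetype name and the time zone are assumptions; use the zone the source actually writes in). Pinning TZ to a named zone lets Splunk apply the DST shift itself instead of treating the timestamps as a fixed offset:

[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
TZ = America/New_York
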
Hi, For the standard "predict" function in Splunk, what are the options to access the ACCURACY of the predictions?  Thanks, Patrick
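For reference, the predict command emits confidence-interval fields alongside each prediction, and holding back known data points is one way to measure accuracy against actuals; a minimal sketch (index and span are placeholders):

index=my_index | timechart span=1h count | predict count future_timespan=24 holdback=24

The output includes prediction(count) plus upper95(prediction(count)) and lower95(prediction(count)) boundaries; with holdback, the last 24 actual values can be compared against what the model predicted for them.
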
Hello, I have configured a custom indexed field via transforms.conf and props.conf as follows:

transforms.conf (/apps/search/local/):

[EventID]
FORMAT = EventID::$1
REGEX = <regex expression>
WRITE_META = true

props.conf (/apps/search/local):

[<sourcetype>]
DATETIME_CONFIG =
NO_BINARY_CHECK = true
category = custom
pulldown_type = 1
LINE_BREAKER = ([\r\n]+)
TRANSFORMS-EventID = EventID

fields.conf (etc/system/local):

[sourcetype::<sourcetype>::EventID]
INDEXED = True

The field EventID is getting indexed; I have checked it via

| walklex index="<index-name>" type=field
| search NOT field=" *"
| stats values(field)

The field also shows up in the sidebar when searching in smart mode, but not when searching in fast mode. Is there any way to make it show up in fast mode too? I assumed this would have been done by the fields.conf stanza, but it seems not to work for me.

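One thing worth checking (an assumption about the cause, not a confirmed diagnosis): the fields.conf spec documents the stanza header as just the field name, rather than a sourcetype-scoped name, and the file must be readable by the search head. A sketch per the documented format:

[EventID]
INDEXED = true

After moving the stanza and restarting (or refreshing the configuration), fast mode can then treat EventID as an indexed field.
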
Hi Splunk Community, I have 2 tables I am attempting to merge together. Both tables are in CSVs that I am trying to pull from. Does anyone know the command so that the data from the second table gets added to the bottom of the first?

table 1:
a1
b1
c3

table 2:
d4
e5
f6

Combined:
a1
b2
c3
d4
e5
f6

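A minimal sketch, assuming the two CSVs are uploaded as lookup files named table1.csv and table2.csv (the filenames are placeholders): append stacks the second result set under the first.

| inputlookup table1.csv
| append [| inputlookup table2.csv]

If the two files use different column names, a rename before the append lines the columns up.
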
Hi, I have a dashboard with a panel, and only if the user clicks on a row of this panel does another panel pop up on the same dashboard. The field that connects both panels is "SESSION_UUID". The drilldown feature is not currently working, though. Here is my XML code:

<form theme="dark" script="tokenlist.js">
  <row>
    <panel>
      <table>
        <search>
          <query>
            index=fraud_glassbox sourcetype="gb:sessions"
            | table SESSION_UUID Global_MCMID_CSH SESSION_TIMESTAMP COUNTRY CITY CLIENT_IP Global_EmailID_CSH
          </query>
        </search>
        <option name="drilldown">cell</option>
        <drilldown>
          <set token="tokComponent">$row.component$</set>
        </drilldown>
      </table>
    </panel>
  </row>
  <row depends="$tokComponent$">
    <panel>
      <table>
        <search>
          <query>
            index=fraud_glassbox sourcetype="gb:hit" component="$tokComponent$"
            | table HEADER_REQUEST_REFERER, URL_PATH, SESSION_TIMESTAMP, username, CLIENT_IP, PACKET_IP
          </query>
        </search>
      </table>
    </panel>
  </row>
</form>

Can you please help? At the moment I am receiving the following error (screenshot not included), and the 2nd panel should not be appearing anyway unless the user clicks on a row in the 1st panel.

Thanks, Patrick

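One possible reading (an assumption based on the description, since the error screenshot is not included): the drilldown sets $row.component$, but the first panel's table has no component column, so the token never receives a value, and the connecting field is SESSION_UUID. A minimal sketch of the drilldown and the second panel keyed on SESSION_UUID instead:

<drilldown>
  <set token="tokSession">$row.SESSION_UUID$</set>
</drilldown>

<row depends="$tokSession$">
  ...
  <query>
    index=fraud_glassbox sourcetype="gb:hit" SESSION_UUID="$tokSession$"
    | table HEADER_REQUEST_REFERER, URL_PATH, SESSION_TIMESTAMP, username, CLIENT_IP, PACKET_IP
  </query>
</row>

With depends="$tokSession$", the second row stays hidden until the token is set by a click.
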
Hello Community! We have a particular set of searches that rely on a lookup against a managed lookup (adhock). The lookup has 2 columns, Username and Status. Currently, we update this list manually every day by going into Content Management, searching for the file, and then adding and deleting entries. This was OK to start, but now the list is getting unmanageable. What we would like to do, ideally, is take a local CSV and upload it over top of the one that exists via a PowerShell script that will be run on a local machine. If that is not an option, I would be willing to have a script that creates a search to update the managed lookup, which can then be copied and pasted into a search. Looking for suggestions and ideas. Thanks in advance.

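For the copy-and-paste option, a minimal sketch of the SPL a script could generate (the lookup name, delimiters, and sample rows are placeholders): pack the CSV rows into one string, split them back out into events, and overwrite the lookup with outputlookup.

| makeresults
| eval rows="user1,active;user2,disabled;user3,active"
| eval rows=split(rows, ";")
| mvexpand rows
| eval Username=mvindex(split(rows, ","), 0), Status=mvindex(split(rows, ","), 1)
| table Username Status
| outputlookup adhock.csv

A PowerShell script would only need to read the local CSV and splice its rows into the eval string before the search is pasted in.
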
We have a distributed search environment, with 2 very old indexers (the original servers) and 3 new indexers in a cluster.  The old indexers have been removed from the destination lists in outputs.conf nearly everywhere, and most of the data is between 5 and 6 months old, except for internal indexes. I can't find what my next steps are to prep these servers for retirement, such as force-freezing the buckets they still hold, etc.  Suggestions? Thanks.
There are a lot of security alerts for "PowerShell DownloadString" for the Chocolatey installer. Is there a way to whitelist that alert, keeping in mind that there was a recent attack, "Serpent Backdoor Slithers into Orgs Using Chocolatey Installer"?

Ref links:
https://threatpost.com/serpent-backdoor-chocolatey-installer/179027/
https://www.bleepingcomputer.com/news/security/serpent-malware-campaign-abuses-chocolatey-windows-package-manager/

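One common tuning pattern (a sketch only; whether suppressing this alert is safe given the Serpent campaign is a risk decision, and the field names below are placeholders that may differ in your alert): exclude only the exact, expected install command rather than anything mentioning Chocolatey.

... existing alert search ...
NOT (process_name="powershell.exe" AND process="*https://community.chocolatey.org/install.ps1*")

Keeping the exclusions in a lookup of approved command lines, rather than hard-coding them, makes the exceptions easier to audit.
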
Hi Team, because the data storage time of Splunk is limited, we have a scheduled task to export data from Splunk to AWS S3 through the Splunk SDK.

SDK output mode: JSON

SPL:

search index=dput
| fields - _raw date_* _cd _kv _bkt _si splunk_server punct timeendpos exectime index lang
| table *

But recently I encountered a problem. When I batch-query data within a 10-minute window (about 400,000 logs), I found that some logs lose some fields. For example:

Raw data:

"2022-03-01T20:47:04.435Z [XNIO-1 task-16] INFO c.m.assertservice.service.impl.NotebookServiceImpl env=\"PROD\" hostname=\"\" client_ip=\"\" service_name=\"assetservice\" service_version=\"release-1.12.0\" request_id=\"98ad59ad-e973-4258-b559-a5c82476f14d\" event_type=\"read\" event_status=\"success\" event_severity=\"low\" notebook_topics=\"[Manager Research]\" object_type=\"Notebook\" object_id=\"6bcb4ad5-596c-4738-90b9-4bdff9515f12\" component=\"\" event_id=\"98ad59ad-e973-4258-b559-a5c82476f14d\" application=\"\" user_id=\"\" notebook_title=\"Portfolio Manager Performance History\" action=\"GET\" details=\"Get a notebook,title:Portfolio Manager Performance History, type:[LIBRARY]\" eventtype=\"usage\" timestamp=\"2022-03-01T20:47:04.435348Z\" application_area=\"NONE\" event_description=\"Get Notebook By Id UsageTracking\""

Search result:

{
  "_indextime": "1646167627",
  "_sourcetype": "dput_usage",
  "_subsecond": ".435",
  "_time": "2022-03-01T14:47:04.435-06:00",
  "action": "GET",
  "application": "",
  "application_area": "NONE",
  "component": "",
  "details": "Get a notebook,title:Portfolio Manager Performance History, type:[LIBRARY]",
  "env": "PROD",
  "event_id": "98ad59ad-e973-4258-b559-a5c82476f14d",
  "event_length": "899",
  "event_status": "success",
  "eventtype": "usage",
  "extracted_sourcetype": "dput_usage",
  "host": "",
  "hostname": "",
  "linecount": "1",
  "object_id": "6bcb4ad5-596c-4738-90b9-4bdff9515f12",
  "object_type": "Notebook",
  "source": "",
  "sourcetype": "dput_usage",
  "timestamp": "2022-03-01T20:47:04.435348Z",
  "timestartpos": "0",
  "user_id": ""
}

You can see that fields present in the raw data, such as notebook_title and notebook_topics, do not appear in the search result. (I also seem to have this problem exporting JSON from the Web UI.) This happens when I query a lot of data at the same time.

But when I query this log alone and return it through the SDK, this problem does not occur; it returns all the fields:

{
  "_indextime": "1646167627",
  "_sourcetype": "dput_usage",
  "_subsecond": ".435",
  "_time": "2022-03-01T14:47:04.435-06:00",
  "action": "GET",
  "application": "",
  "application_area": "NONE",
  "client_ip": "",
  "component": "",
  "details": "Get a notebook,title:Portfolio Manager Performance History, type:[LIBRARY]",
  "env": "PROD",
  "event_description": "Get Notebook By Id UsageTracking",
  "event_id": "98ad59ad-e973-4258-b559-a5c82476f14d",
  "event_length": "899",
  "event_severity": "low",
  "event_status": "success",
  "event_type": "read",
  "eventtype": "usage",
  "extracted_sourcetype": "dput_usage",
  "host": "",
  "hostname": "",
  "linecount": "1",
  "notebook_title": "Portfolio Manager Performance History",
  "notebook_topics": "[Manager Research]",
  "object_id": "6bcb4ad5-596c-4738-90b9-4bdff9515f12",
  "object_type": "Notebook",
  "request_id": "98ad59ad-e973-4258-b559-a5c82476f14d",
  "service_name": "assetservice",
  "service_version": "release-1.12.0",
  "source": "",
  "sourcetype": "dput_usage",
  "timestamp": "2022-03-01T20:47:04.435348Z",
  "timestartpos": "0",
  "user_id": ""
}

The Java SDK version I am using is 1.8.0 and the C# SDK is 2.2.9. Can anyone help me understand this? Thanks a lot!

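One direction worth investigating (an assumption, not a confirmed cause): automatic key-value extraction at search time is governed by limits.conf, and the [kv] stanza caps how much of each event is scanned and how many fields get extracted. A sketch of the relevant settings (the values shown are illustrative; check the limits.conf spec for your version before changing anything):

[kv]
limit = 100
maxchars = 10240

If large batch exports are hitting one of these ceilings, raising it on the search tier and re-running the export would show whether that is what drops notebook_title and notebook_topics.
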