All Topics


I have a sourcetype-based definition in which I set INDEXED_EXTRACTIONS=JSON. Under this sourcetype there are 10 sources configured. Out of the 10, say one is not in JSON format. How can I keep using the same sourcetype but not apply INDEXED_EXTRACTIONS=JSON for that particular source alone? I thought of using a source::-based stanza in props.conf with the other attributes and omitting the INDEXED_EXTRACTIONS attribute. In that case, will the value from the sourcetype stanza still be applied?
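For reference, a minimal props.conf sketch of the setup described (the stanza names and file path are hypothetical); whether a setting omitted from the source:: stanza falls back to the sourcetype stanza is exactly the precedence behavior to verify:

# props.conf -- minimal sketch, hypothetical names
[my_json_sourcetype]
INDEXED_EXTRACTIONS = json

# source:: stanzas generally win over sourcetype stanzas for the settings they
# define, but settings they omit may still be inherited from the sourcetype stanza
[source::/var/log/myapp/not_json.log]
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false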
| timechart span=1mon count by status | addtotals row=t col=f labelfield=Total True False "Not available" fieldname="Total_Count" | eval percent=round(((True/Total_Count)*100),0) | table Date Percentage
I chose the column chart visualization but I only get the default color. How can I customize the color based on the range: 0-50% red, 50-70% yellow, 70-100% green?
<option name="charting.fieldColors">{"<50":"#e600ac","Up to 60":"#ff0000","Up to 80":"#ffa31a","Up to 100":"#33cc33"}</option>
I tried modifying this option but it did not work. Can anyone help? Thanks.
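One hedged way to get range-based colors on a column chart is to split the percentage into one series per range and map those series names in charting.fieldColors; a sketch building on the search above (thresholds as stated in the question, hex colors arbitrary). Because each month's value lands in exactly one of the three series, every column is drawn in the color of its range.

| timechart span=1mon count by status
| addtotals row=t col=f fieldname="Total_Count" True False "Not available"
| eval percent=round((True/Total_Count)*100, 0)
| eval Red=if(percent<50, percent, null()), Yellow=if(percent>=50 AND percent<70, percent, null()), Green=if(percent>=70, percent, null())
| table _time Red Yellow Green

<option name="charting.fieldColors">{"Red":"#d93f3c","Yellow":"#f8be34","Green":"#53a051"}</option>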
Hi Team, we have users logging in from multiple devices, so we need to show the count of devices per logged-in user. Can you please advise a query for this? Regards, Nagalakshmi A
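A minimal sketch of such a query, assuming an index, sourcetype, and field names (user, device) that would need to be adapted to the actual data:

index=your_index sourcetype=your_sourcetype
| stats dc(device) AS device_count values(device) AS devices by user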
Azure Stack - private regions. I wanted to ask/confirm that the Splunk Add-on for Microsoft Cloud Services - for the purposes of pulling in Azure-related logs - would work with Azure Stack, provided the API endpoints were edited in the appropriate files per "Configure the Splunk Add-on for Microsoft Cloud Services for Azure endpoints for international regions" in the Splunk documentation. My only concern relates to this post - Azure China - where the Azure environments are hard-coded. Would I just add a new environment, i.e. AzureStack, around line 88 in the file "splunk_ta_mscs_rh_azureaccount.py" and add that same environment name (and settings) under "mscs_azure_accounts.conf"?
I'm working on building a dashboard that will take a base report and parse it into different items that can be flagged for review. I've been able to get this to work in a roundabout way, but there is a component that seems to require that the base search be run again for each of the 10 panels (meaning 10 searches). I have tried using the weekly scheduled report as the primary data source and chaining the further refinement from there - using | search in the chained searches - but it still runs the entire search again. The biggest problem is that this specific search can take upwards of 20 minutes to complete, meaning I have 10 cores locked up for 20 minutes... not ideal.
A way around this would be to schedule reports of the refined data, which is where I'd like to go next - EXCEPT there is some dynamic data that I'm incorporating into the search. I have a dynamic CSV file that contains usernames of users that should be included in the top-level search query (index=production user IN (user-from-csv,user2-from-csv,etc)). I can get this to work in the dashboard by storing the search results as a token (after using inputlookup and format), but I can't get it to work in the report. Does anybody know how to take a CSV file's contents and store them in a variable, OR run a sub-search and pass those results as a string later in the main search? A non-working view of what I would like to see (understanding that this isn't how Splunk works):
|eval included-users=inputlookup included-users.csv
index=production user IN (included-users) action=success
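A subsearch is the usual way to do this without dashboard tokens, and it also works inside a saved report because the subsearch is evaluated each time the report runs. A sketch, assuming the CSV has a column named user (rename it first if it does not):

index=production action=success
    [ | inputlookup included-users.csv | fields user | format ]

The subsearch expands to ( ( user="alice" ) OR ( user="bob" ) ... ) before the outer search executes.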
Hi, I am trying to add a % sign to the "by percent" column only. I can't seem to get it to show. Thanks
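If the goal is display-only, a fieldformat sketch (field name taken from the question) leaves the underlying numeric value intact while appending the sign:

| fieldformat "by percent" = tostring('by percent') . "%"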
I have a single report that features a list of devices broken up by the group that supports them. I want that single report to run on a monthly basis. When it runs, it should do the following (see the sketch after this list):
- The report runs.
- The device list is generated, so one column is Computer_Name and another column is Support_Team.
- Upon generation, emails kick off to email@domain.com.
- The body needs to be dependent on the Support_Team field, i.e. all devices under Support_Team = Alpha should reference only Alpha's devices (email 1), and all devices under Support_Team = Bravo should reference only Bravo's devices (email 2).
Is this possible, or is this a pipe-dream? Today I handle this by having 10 separate reports, one per Support_Team value.
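One possible pattern, sketched with a hypothetical lookup name and an assumed mapping from Support_Team to an email address, is to group the report by team and then drive sendemail from map; this is a sketch to verify, not a turnkey solution:

| inputlookup monthly_device_report.csv
| stats list(Computer_Name) AS device_list by Support_Team
| eval device_list=mvjoin(device_list, ", ")
| map maxsearches=20 search="| makeresults | sendemail to=\"$Support_Team$-team@domain.com\" subject=\"Monthly device report for $Support_Team$\" message=\"$device_list$\" sendresults=false"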
I'm trying to integrate TA-DMARC but it's returning the following error. Do I need to do some configuration in O365 too?
Log error: No access token found for client ID: dmarc@client.com.br - result {'error': 'unauthorized_client', 'error_description': "AADSTS700016: Application with identifier 'dmarc' was not found in the directory 'Corporativo'. This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant. You may have sent your authentication request to the wrong tenant.\r\nTrace ID: xxxxxxx54f9-4e2c-bead-0bf3c5e46600\r\nCorrelation ID: xxxxxxxxxx0-c9c63e069555\r\nTimestamp: 2023-08-23 14:31:40Z", 'error_codes': [700016], 'timestamp': '2023-08-23 14:31:40Z', 'trace_id': 'e7043f38-54f9-4e2c-bead-0bf3c5e46600', 'correlation_id': '3edd76d4-xxx-4ea0-xxx-xxxxxxxxxx', 'error_uri': 'https://login.microsoftonline.com/error?code=700016'}
Hi, I just started using Splunk yesterday and am looking for the proper syntax to search for the creation of registry keys on all machines and, if possible, how to get alerts for deleted registry keys. I haven't touched every dashboard in Splunk, but I imagine a simple table that includes all machines in the network with registry key counts would be the way to go. I'm not sure about the alerting part for deleted keys.
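If Sysmon is deployed with registry auditing enabled and its logs are being collected, a starting-point search for registry key creation/deletion could look like the sketch below; the index, sourcetype, and field names are assumptions that depend on the local add-on and inputs.

index=wineventlog sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=12
| stats count by host EventType TargetObject

Sysmon event ID 12 covers registry object create and delete; filtering on EventType (e.g. DeleteKey) and saving the search as an alert would cover the deleted-key case.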
Hello! I'm working on a Rex Expression for my job, and wanted to ask for some assistance in developing it. I'm supposed to make a rex expression to pull out the "Fixed version" of a piece of software out of a field called "pluginText". Right now the problem is the Rex expression I've made only works half the time. My Rex expression is currently:  | rex field=pluginText max_match=0 "\s+Fixed version\s+:\s+(?<FixedVersion>.+)"\n Here are some relevant examples of the sorts of data I'm working with: <plugin_output>    Path        : C:\Program Files\VMware\VMware Tools\VMware VGAuth\libssl-3-x64.dll    Reported version : 3.0.3.0    Fixed version : 3.0.4</plugin_output> and <plugin_output>    Path : C:\myPrograms\cygwin64\bin\openssl.exe    Reported version : 1.1.1.4    Fixed version : 1.1.1p   Path : C:\myPrograms\Git\usr\bin\openssl.exe   Reported version : 1.1.1.9   Fixed version : 1.1.1p   Path : C:\myPrograms\Git\mingw64\bin\openssl.exe   Reported version : 1.1.1.9   Fixed version : 1.1.1p </plugin_output> The Rex expression I made works perfectly on the second example I've provided, but not the first. I'm guessing it's due to the </plugin_output> on it. Any advice for how I can tweak it to work for both sorts of data? Attached is a visual aid of the first example, for clarity. Thank you in advance!
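The likely culprit is the greedy .+ capture: when "Fixed version" is the last line of the field, it swallows the trailing </plugin_output> tag along with the version number. A sketch that stops the capture at whitespace or the opening angle bracket:

| rex field=pluginText max_match=0 "Fixed version\s*:\s*(?<FixedVersion>[^\s<]+)"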
Hello all, I need some suggestions/recommendations about expanding our Splunk infrastructure. What are the considerations for putting a heavy forwarder in a different network/VLAN than the deployment server and the indexers? What are the considerations for putting indexers on different networks/VLANs than the other indexers and the cluster master? Will there be any synchronization issues? The context here is that we will onboard new logs from a different network/domain/VLAN, and we were thinking of putting the heavy forwarder in the "new domain" to reduce the overhead of firewall rules and to centralize data that will later be sent to the indexers. We were also wondering why not put both a heavy forwarder and an indexer in the "new domain" and open the required flows towards/from the deployment server/cluster master/other indexers/search heads. Any comments or suggestions? Thank you so much.
I am trying to do something in a rather complex search, but I believe I can map it down to the following. I would like to use variable expansion or other (preferably simple) magic to recreate this query: index=xyz severity=WARN ("This" OR "That") So something like index=xyz severity=WARN | eval foo="This" | eval bar="That" | search ($foo$ OR $bar$) There is a caveat that much later in the query I'd also need to filter on A_FieldValue="*$foo$*" ( I am aware of the performance penalty of wildcard prefixes)   ----   Possibly presenting a more specific (less contrived, but still contrived) example would help me find alternate answers:   How would one go about crafting a query to find log messages that contain the current e.g. year month day, as of the time of execution of the query? index=xyz severity=WARN | eval mentionsThisMonth=strftime(_time,"%Y.%m") | search "$mentionsThisMonth$* At this point I'm assuming I'll have to regex into a field and then compare the field to the calculated variable. Better (More performant, less memory and CPU hungry) solutions would be most welcome.    Note: I am absolutely NOT interested in how to use date ranges. Which is all you find when you try to google anything to do with 'search' and 'date' as concepts together. I mean literally that there is a date-like thing in the raw log that isn't quite date-like enough to be automatically parsed out into a field.   Thanks much,
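$...$ tokens are a dashboard feature, not SPL; in a plain search the equivalent is to compute the value with eval and compare it with where/like (note this is a post-filter, so it will not narrow the initial index scan). A sketch for the current-month example:

index=xyz severity=WARN
| eval mentionsThisMonth=strftime(now(), "%Y.%m")
| where like(_raw, "%" . mentionsThisMonth . "%")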
Hi everyone, the requirement is to send the same logs to multiple indexers, with a different index name on each. The scenario: send logs from a UF to indexer 1 (indexed in index1), and send the same logs from the UF through an intermediate forwarder (HF) to a second indexer (indexed in an index named index2). I am able to send the logs to both indexers if the index name is the same, but when I change the index name, Splunk is not able to send the logs to one of the indexes. Thanks for the help!
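Assuming the UF's inputs are configured with index=index1 and the copy routed through the HF should land in index2, one common pattern is to rewrite the index key at parse time on the heavy forwarder; a sketch with a hypothetical sourcetype name:

# props.conf on the heavy forwarder
[my_sourcetype]
TRANSFORMS-route_index = set_index2

# transforms.conf on the heavy forwarder
[set_index2]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = index2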
Dear Community, I have two questions. First, I have index=linux and some computers. I want to track file modifications to the sudoers and sshd_config files. For example, if someone makes a change to sshd_config, I want to see this change in Splunk as an alert. I searched on the internet about this and couldn't find anything. Actually, the real thing I want is to track the PermitRootLogin setting in sshd_config changing from no to yes, but as far as I know this is hard to detect in Splunk. Any help would be appreciated!
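One common approach, assuming auditd is available and /var/log/audit/audit.log is already being collected into index=linux, is to add watch rules and alert on the audit key; the key names below are arbitrary and the field names depend on how the audit log is parsed:

# on each Linux host
auditctl -w /etc/ssh/sshd_config -p wa -k sshd_config_change
auditctl -w /etc/sudoers -p wa -k sudoers_change

# alert search in Splunk
index=linux "sshd_config_change" OR "sudoers_change"
| table _time host exe auid key

The audit rule only tells you that the file was written; catching the specific PermitRootLogin no -> yes transition would still require indexing the file content itself or a scripted/diff-based input.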
Hello, I am trying to make a table viz in my absolute-layout Dashboard Studio dashboard. I have a query with a field called "Failure_Code" that usually contains hex values (0x18 or 0x12). When developing the query in the Search app, the data is printed in the statistics table as-is (in 0x... format), and that is how I would like it printed in my dashboard. As soon as I add the query to a table in a dashboard, the values automatically get converted to integers (24, 18, ...). I am not using any eval, stats, or rename on this field. I tried adding a formatting option on that column, noticed it's automatically set to "number" in the context stanza (formatting function), and modified it to "string" in the JSON source code, but it didn't stop the formatting. I tried the tonumber(myField,"hex") solution but my field turned to null, and also tried replacing "hex" with 16. I tried printf; nothing seems to work. Thank you for the help!
Hi, does anyone know how to hide/remove the option label in Single Visualisation mode in Dashboard Studio? I want to hide the search, expand, reload, and download options. Thanks.
Hi all, I am using the search below to get the status of any bot that is offline, and I want to create an alert if the status has been Offline for more than 10 minutes. How can I modify this search to find any status that has been Offline for more than 10 minutes? I am using DB Connect to pull the data every 5 minutes, and the data in Splunk updates every 5 minutes (the default polling interval). Below is the data for the last 5 minutes.

index=Testindex sourcetype="Bueprism" source=Botstatus | table BOT_Name lastupdated BOT_Status _time | search BOT_Status = Offline

BOT_Name              lastupdated              BOT_Status
HOUVMITBPRSMX20:8001  2023-08-23 05:14:12.503  Offline
HOUVMITBPRSMX14:8001  2023-08-23 08:20:11.77   Offline
HOUVMITBPRSMX13:8001  2023-08-23 08:20:12.693  Offline
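A sketch of an alert search (run, for example, every 5 minutes over a 15-minute window) that keeps only bots whose Offline polls span at least 10 minutes; it assumes a bot stays Offline across consecutive polls rather than flapping, and reuses the index/source/field names from the question:

index=Testindex sourcetype="Bueprism" source=Botstatus BOT_Status=Offline earliest=-15m
| stats earliest(_time) AS first_offline latest(_time) AS last_offline by BOT_Name
| where (last_offline - first_offline) >= 600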
Hi Team, if any new server is built, it should be added to monitoring by default (at least OS monitoring should be up and running). While building the server, automation scripts are in place to install the latest machine agent as well. Once it reports to the AppDynamics controller, we currently need to attach the health rules manually. Is it possible to automate the rest of the configuration options once the agent is reporting to the controller? If yes, please shed some light on it. Thanks.
index=o365 [ | inputlookup watchlistriskyusers.csv | rename email AS query | fields query ] sourcetype="o365:management:activity" eventtype=o365_authentication | spath | iplocation ClientIP | table UserId ClientIP DisplayName status Country

When I run the above search, I am not able to get the country; Country is blank.

| makeresults | eval myip="2001:4860:4860::8888" | iplocation myip

However, when I run this, it is able to show me the country. Can you help me make the first search work so that the country is shown?
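One frequent cause of a blank Country is that the O365 ClientIP value carries a port suffix (for example 40.1.2.3:443), which iplocation cannot resolve; it can also simply be that the address is not present in the bundled geo database. A sketch of the same search that strips a trailing :port from IPv4 values before the lookup:

index=o365 sourcetype="o365:management:activity" eventtype=o365_authentication
    [ | inputlookup watchlistriskyusers.csv | rename email AS query | fields query ]
| eval ip=if(match(ClientIP, "^\d{1,3}(\.\d{1,3}){3}:\d+$"), replace(ClientIP, ":\d+$", ""), ClientIP)
| iplocation ip
| table UserId ClientIP ip DisplayName status Country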
<label>HELLO WORLD</label> <description>HELLO WORLD is a Dashboard ~~~ </description> For ordinary panels, managing styling with CSS inside <html> </html> blocks works fine. However, I just cannot seem to find out how to change the styling of the dashboard title and description. Can anyone help out? I have no clue.