All Posts



rex has a mode option which can be set to sed to allow edits to strings (rex - Splunk Documentation). props.conf has SEDCMD- stanzas which can do the editing before indexing (props.conf - Splunk Documentation).
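As an illustrative sketch of the search-time form (the field pattern here is hypothetical, not taken from the original post):

```spl
... | rex field=_raw mode=sed "s/password=\S+/password=########/g"
```

The expression follows the usual sed s/regex/replacement/flags syntax; the same expression placed in a SEDCMD- stanza in props.conf is applied to the raw event before indexing.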
Hello, I could not find a clear answer. We have a setup where we run an IIS server on a Windows virtual machine. On the IIS server we run a PHP webshop that makes calls to different databases and external services. Does your Observability system work out of the box on the PHP webshop, or is this not supported? The reason for the question is that some monitoring solutions, such as AppDynamics and New Relic, do not support that setup. The question is mainly to know whether we should start moving the setup to a different tech stack or whether we can wait a little.
Assuming your ingest has already parsed your timestamp into the _time field, then you can just format that to get the time | eval Time=strftime(_time, "%I:%M %p")
Hello Splunkers!! Below are sample events in which I want to mask the UserID field and Password field. There are no selected or interesting fields available, so I want to mask it in the raw event directly. Please suggest one solution from the UI using the rex command in sed mode, and a second solution using props.conf and transforms.conf on the backend.   Sample log:   <?xml version="1.0" encoding="UTF-8"?> <HostMessage><![CDATA[<?xml version="1.0" encoding="UTF-8" standalone="no"?><UserMasterRequest><MessageID>25255620</MessageID><MessageCreated>2024-04-05T07:00:55Z</MessageCreated><OpCode>CHANGEPWD</OpCode><UserId>pnkof123</UserId><Password>Summer123</Password><PasswordExpiry>2024-06-09</PasswordExpiry></UserMasterRequest>]]><original_header><IfcLogHostMessage xsi:schemaLocation="http://vanderlande.com/FM/Gtw/GtwLogging/V1/0/0 GtwLogging_V1.0.0.xsd"> <MessageId>25255620</MessageId> <MessageTimeStamp>2024-04-05T05:00:55Z</MessageTimeStamp> <SenderFmInstanceName>CMP_GTW</SenderFmInstanceName> <ReceiverFmInstanceName>FM_BPI</ReceiverFmInstanceName>   </IfcLogHostMessage></original_header></HostMessage>
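A hedged sketch of the index-time approach for XML tags like these; the sourcetype name is an assumption, and each SEDCMD- runs a sed-style substitution on the raw event before it is written to the index:

```
# props.conf (sourcetype name is hypothetical)
[vanderlande:xml]
SEDCMD-mask_userid   = s/<UserId>[^<]*<\/UserId>/<UserId>xxxxx<\/UserId>/g
SEDCMD-mask_password = s/<Password>[^<]*<\/Password>/<Password>xxxxx<\/Password>/g
```

SEDCMD- alone does not require transforms.conf; the transforms-based alternative uses a TRANSFORMS- stanza whose transform writes to DEST_KEY = _raw.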
It looks like you are excluding all the message=SUCCESS events, so you will never see them in the transaction data. If you want them included, you will need to remove that message!="*(SUCCESS)*" constraint. Then your transaction will have the SUCCESS event included, so at that point you can filter out those events that have succeeded after failing. However, you will need to take care of ordering - you know your data, but can the SUCCESS come AFTER the fail?  
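As a hedged sketch, assuming the field names from the original search: include the SUCCESS events in the base search, build the transactions, then drop any transaction that contains a SUCCESS message:

```spl
index="mulesoft" applicationName="s-concur-api" environment=PRD (tracePoint="EXCEPTION" OR message="*(SUCCESS)*")
| transaction correlationId
| search NOT message="*(SUCCESS)*"
```

After transaction, message is multivalued across the grouped events, so the final search removes whole transactions in which any event matched "*(SUCCESS)*".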
You can also use the "fixedrange=f" setting on the timechart command.  
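As an illustrative example (the index and span are assumptions), fixedrange=f stops timechart from padding the x-axis out to the full selected time range, so the chart covers only the span where data exists:

```spl
index=my_index | timechart span=1h count fixedrange=f
```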
Hi!   I did uninstall the old version, cleaned up the folders and registry, and disabled McAfee Antivirus too. I did the installation again and still hit the same issue. I then tried to install the old version back; it now encounters the same issue as the new version installation. My OS is Windows Server 2016 Standard.  
Hello @anandhalagaras1  I believe that while changing the default value directly may not be possible, we can still achieve the desired outcome. Instead of adjusting the default setting, we can create a scheduled search with the preferred value of 20. This means that whenever the search is scheduled to run, it will automatically use the desired setting without needing to be manually adjusted each time. This ensures a consistent experience for users without worrying about the default value being reset. If this reply helps you, Karma would be appreciated.
Hi @whitecat001 , you can use the regex hinted at by @ITWhisperer , but when you have a fieldname=fieldvalue pair, as in your case, you could simply use the field for searching your data: index=your_index permission=Permission12345 | ... Ciao. Giuseppe
Hello Lily, the two charts look different because of the time ranges used. The first chart comes from a Splunk search, which only draws a line on the chart where data exists. The second chart is from Dashboard Studio, which shows a timeline for the "entire period" even where there is no data. To make chart 2 look like chart 1, set its time range to match the data.       With the time range set to match the data:   (I don't speak Korean well, so I had to use a translator.)
I would appreciate it if you could share an example.
Hi @theprophet01, If you have the itoa_admin role, you can export services, entities, glass tables, KPI searches, templates, etc. from the following menu: Configuration > Backup/Restore.
1. Click Create Job > Create Backup Job.
2. Select Partial Backup, give it a name and description, uncheck "include conf files", then click Next.
3. Select the services you'd like to back up.
4. Click Save and Backup.
You will be taken to the Backup/Restore jobs page, where your job will be queued. When it's finished, usually after a few minutes, you can download the backup as a zip. Go to the same page on a different Splunk instance to restore it - this time select Restore Job and upload the zip file. See the capabilities here: https://docs.splunk.com/Documentation/ITSI/4.18.1/Configure/Capabilities You'll need the ones listed under "Backup/Restore", which by default are only given to itoa_admin.
Data Summary is not showing any hosts, even though I already added a UDP input on port 514 with the IP address.
Hi Guys, In my scenario I need to show error details per correlation id. There is a field called tracePoint="EXCEPTION" and a message field with PRD(ERROR):. In some cases we have an exception first and after that the transaction succeeds, and in that case I want to ignore the transaction in my query. But it is not ignoring the successful correlationId in my result:

index="mulesoft" applicationName="s-concur-api" environment=PRD (tracePoint="EXCEPTION" AND message!="*(SUCCESS)*")
| transaction correlationId
| rename timestamp as Timestamp correlationId as CorrelationId tracePoint as TracePoint content.ErrorType as Error content.errorType as errorType content.errorMsg as ErrorMsg content.ErrorMsg as errorMsg
| eval ErrorType=if(isnull(Error),"Unknown",Error)
| dedup CorrelationId
| eval errorType=coalesce(Error,errorType)
| eval Errormsg=coalesce(ErrorMsg,errorMsg)
| table CorrelationId, Timestamp, applicationName, locationInfo.fileName, locationInfo.lineInFile, errorType, message, Errormsg
| sort -Timestamp
The reason why your subsearch is taking a long time is _probably_ the volume of hosts: a large X=Y OR A=B OR C=D expression in the search can be very slow to parse and set up, hence the lookup approach can often be the better option. The second way is fundamentally different from your previous search. join is limited in itself, and join+inputlookup is the wrong way to use lookups. The lookup command is designed to enrich data with results from a lookup. If a value cannot be found in the lookup, you will not get output fields from the lookup, and you can test for that state. Have you tried it?
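A minimal sketch of the enrich-then-filter pattern this describes (the index, lookup file, and field names are assumptions):

```spl
index=my_index
| lookup expected_hosts.csv host OUTPUT host AS in_lookup
| where isnotnull(in_lookup)
```

Events whose host is not present in expected_hosts.csv get no in_lookup value and are dropped by the where clause; invert the test with isnull() to find hosts missing from the lookup instead.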
How are you using the token in your search?
I changed ulimits to 64000:

ulimit -n 64000

I also realized I still had THP enabled on the CentOS 7 VM it runs on, so I disabled it and rebooted the VM:

vim /etc/default/grub    (added transparent_hugepage=never)
echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled
echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag

I also enabled auto start for Splunk:

/opt/splunk/bin/splunk enable boot-start -user splunk -systemd-managed 1

I then rebooted. After doing all that and the reboot, the searches started to work correctly and stopped erroring out. Hopefully this thread can help someone else who has this weird problem!
The installation instructions for this app seem to refer to a "TA_genesys_cloud" app, while this app is named "genesys_cloud_app". However, there does not seem to be a TA for Genesys Cloud in Splunkbase. There are .SPL files in the source GitHub repo at https://github.com/SplunkBAUG/CCA though. Perhaps those are worth looking at. EDIT: Note that you should be cautious of .SPL files hosted on third-party sites. SPL files hosted on Splunkbase go through an inspection process, whereas you're on your own if you install files from third-party sources. I recommend inspecting the contents of the file and determining how it works before installing it in your Splunk environment.
Hi, I have a simple dropdown with 3 options: All, AA and BB. When I select AA/BB I get correct results; however, when I select "All" it says "No search results returned". Not sure what I am doing wrong, can anyone help me solve this issue?

"input_iUKfLZBh": {
    "options": {
        "items": [
            { "label": "AA", "value": "AA" },
            { "label": "BB", "value": "BB" },
            { "label": "All", "value": "*" }
        ],
        "token": "Config_type",
        "defaultValue": "AA"
    },
    "title": "Select Error Type",
    "type": "input.dropdown"
}
I tried that one. I have a Debian test system, and downloaded the x64 Debian package from https://download.oracle.com/java/17/latest/jdk-17_linux-x64_bin.deb . I used dpkg to install it, and it made a directory at /usr/lib/jvm/jdk-17-oracle-x64/ . However, providing this path to the DB Connect app still failed to reset the task server. Then I tried "apt install default-jre". It created the folder "/usr/lib/jvm/java-17-openjdk-amd64" along with links in the "/usr/lib/jvm/" directory. For some reason the Splunk DB Connect app would not accept "/usr/lib/jvm/java-17-openjdk-amd64" (failed to reset task server), but it did accept "/usr/lib/jvm/java-1.17.0-openjdk-amd64/" and successfully restarted the task server. Unless you have a strong reason to use a specific JDK, I recommend trying different ones until you find one that works.